Open sleeepeer opened 1 year ago
Hi! I want to use the fairseq 13B model discussed in your paper. Could you tell me what I should do?
We are currently not planning to incorporate the fairseq model in the released code, since fairseq models are much earlier (and worse) versions of subsequent models like OPT and LLAMA. However, adding OPT or LLAMA would be pretty straightforward. After you git clone the MetaICL codebase as the README suggests:
- Modify these lines to load OPT or LLAMA models.
- Modify these lines to use the OPT tokenizer or LLAMA tokenizer instead of the GPT-2 tokenizer.
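For reference, a minimal sketch of what those two modifications amount to (this assumes the standard Hugging Face Auto classes; the model ID below is just an example, not a prescribed choice):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example Hugging Face model ID; substitute the OPT/LLAMA checkpoint you want.
model_name = "facebook/opt-6.7b"

# Load the causal LM and its matching tokenizer in place of GPT-2's.
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```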
Thank you! In fact, I've tried GPT-J-6B and OPT-6.7B, but something strange is happening.
Here is the result of the MetaICL model on glue-sst2_0_correct:
```
06/01/2023 08:54:46 - INFO - __main__ - out/metaicl+direct+glue-sst2+True+0_correct/glue-sst2_0_correct-test-direct-k=16-s=87.pkl
06/01/2023 08:54:46 - INFO - __main__ - torch.Size([1744, 1024])
06/01/2023 08:57:06 - INFO - __main__ - Accuracy=0.9345392765657616
06/01/2023 08:57:06 - INFO - __main__ - Macro-F1 of None over 1 target tasks: 93.5
```
And here is OPT-6.7B (gold demonstrations got this low accuracy as well):
```
06/04/2023 03:39:12 - INFO - __main__ - out/facebook/opt-6.7b+direct+glue-sst2+True+gold/glue-sst2-test-direct-k=16-s=87.pkl
06/04/2023 03:39:12 - INFO - __main__ - torch.Size([1744, 1024])
06/04/2023 03:46:05 - INFO - __main__ - Accuracy=0.33738601823708203
06/04/2023 03:46:05 - INFO - __main__ - Macro-F1 of None over 1 target tasks: 34.2
```
The results of GPT-J-6B are very unstable:
```
05/29/2023 03:02:19 - INFO - __main__ - Accuracy=0.5045671080809405
05/29/2023 03:09:34 - INFO - __main__ - Accuracy=0.9082395033147426
05/29/2023 03:16:49 - INFO - __main__ - Accuracy=0.8974978042224966
05/29/2023 03:24:04 - INFO - __main__ - Accuracy=0.5204062820022217
05/29/2023 03:31:19 - INFO - __main__ - Accuracy=0.8740684112412042
05/29/2023 03:31:19 - INFO - __main__ - Macro-F1 of None over 1 target tasks: 74.1
```
I changed the random seed several times but I still got similar results.
My only modification was adding this line to set fp16, but the results of the MetaICL model were not affected by it (still 0.93):
```python
torch.set_default_dtype(torch.float16)
```
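For what it's worth, a minimal alternative sketch, assuming the Hugging Face from_pretrained API, would be to cast only the model to fp16 at load time instead of changing the global default dtype:

```python
import torch
from transformers import AutoModelForCausalLM

# Cast only this model's weights to fp16; the global default dtype is untouched.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
)
```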
So I'm wondering why the results of GPT-J and OPT are so strange. Is it because they are from HuggingFace?
Looking forward to your reply!
Hi sleeepeer,
It seems like the repo author is not replying. Can you help me with the repo?
I am at the Preparation stage of this repo. I did git clone this repo, cd into it, and then ran git remote add metaicl https://github.com/facebookresearch/MetaICL.git. Then, when I ran git pull metaicl main, I got this:
```
[conda] (base) [lh599@corfu:rethinking-demonstrations]$ git pull metaicl main
warning: no common commits
remote: Enumerating objects: 480, done.
remote: Counting objects: 100% (233/233), done.
remote: Compressing objects: 100% (109/109), done.
remote: Total 480 (delta 219), reused 124 (delta 124), pack-reused 247
Receiving objects: 100% (480/480), 485.62 KiB | 39.00 KiB/s, done.
Resolving deltas: 100% (308/308), done.
From https://github.com/facebookresearch/MetaICL
* branch main -> FETCH_HEAD
* [new branch] main -> metaicl/main
fatal: refusing to merge unrelated histories
```
Did you face any issue like this before?
Please refer to this issue.
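For anyone who hits the same thing: the error means the two repositories share no commit history, so git refuses a plain merge. Assuming the goal is simply to overlay the MetaICL code on top of this repo (worth double-checking against the linked issue), git's --allow-unrelated-histories flag is the standard way through:

```
git pull metaicl main --allow-unrelated-histories
```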
Note that we apply a specific prompt template to each of the datasets we reported in the paper. See Appendix B in the paper or refer to this file. glue-sst2 is not one of our supported datasets, which might result in unexpected input to the model here.
Language models are very sensitive to the choice of prompt, including its format and demonstration examples. An ill-formatted prompt can cause the low and unstable performance you are observing. Try adding your own template for the dataset you want to experiment with.
Re: HuggingFace models: we consider HuggingFace a popular and reliable source for model checkpoints, tokenizers, etc. Therefore, it's unlikely that the HuggingFace models cause the results that you see.
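As a concrete illustration, supporting a new dataset mostly comes down to defining how its fields are rendered into the prompt. The template below is hypothetical (made up for illustration, not the one in Appendix B):

```python
# Hypothetical glue-sst2 template; the real templates are listed in Appendix B.
def format_sst2(sentence, label=None):
    prompt = f"Review: {sentence}\nSentiment:"
    return prompt if label is None else f"{prompt} {label}"

# Render k demonstrations followed by the test input.
demos = [format_sst2("A delightful, heartfelt film.", "positive"),
         format_sst2("Tedious and overlong.", "negative")]
query = format_sst2("An unexpectedly moving story.")
model_input = "\n\n".join(demos + [query])
```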
We are currently not planning to incorporate the fairseq model in the released code, since fairseq models are much earlier (and worse) versions of subsequent models like OPT and LLAMA. However, adding OPT or LLAMA would be pretty straightforward. After you git clone the MetaICL codebase as the README suggests,
- Modify these lines to load OPT or LLAMA models.
- Modify these lines to use OPT tokenizer or LLAMA tokenizer instead of GPT-2 tokenizer.
Hi,
I am planning to run the code on llama3, and I followed your instructions to modify model.py and test.py.
Here is the model.py load function right now:
```python
def load(self, checkpoint=None, gpt2="gpt2-large"):
    '''
    checkpoint can be either keyword of the model or path to the checkpoint file
    '''
    if gpt2 == "llama3":
        model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
    if checkpoint is not None and checkpoint.startswith("gpt"):
        gpt2 = checkpoint
        checkpoint = None
    if checkpoint is None and "gpt" not in gpt2:
        checkpoint = gpt2
        gpt2 = "gpt2-large"
    if checkpoint is None:
        if gpt2.startswith("gpt2"):
            model = AutoModelForCausalLM.from_pretrained(gpt2)
        elif "gpt-j" in gpt2:
            model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
        else:
            raise NotImplementedError(checkpoint)
        self.model_name = gpt2
    else:
        self.model_name = checkpoint
        _id = get_checkpoint_id(checkpoint)
        if _id is not None:
            method, setting, _id = _id
            keyword = checkpoint
            checkpoint = os.path.join("checkpoints", method, setting)
            if self.local_rank <= 0:
                if os.path.exists(checkpoint):
                    self.logger.info("Reusing checkpoint at %s" % checkpoint)
                else:
                    self.logger.info("Downloading %s in %s", keyword, checkpoint)
                    download_file(_id, checkpoint)
            assert os.path.exists(checkpoint), checkpoint
        if self.local_rank <= 0:
            self.logger.info("Loading the model from %s" % checkpoint)
        state_dict = torch.load(checkpoint)
        model = AutoModelForCausalLM.from_pretrained(gpt2, state_dict=state_dict)
    self.model = model
```
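One thing that stands out just from reading this snippet (not verified against the full file): when gpt2 == "llama3" and no checkpoint is given, execution falls through the llama3 branch into the checkpoint-handling logic below it, which reassigns checkpoint/gpt2 and can overwrite model. A guarded early exit, sketched here, would avoid that:

```python
if gpt2 == "llama3":
    model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
    self.model_name = gpt2
    self.model = model
    return  # skip the GPT-2/checkpoint logic below
```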
Here is the test.py main function right now:
```python
def main(logger, args):
    assert (args.dataset is not None and args.task is None) or (args.dataset is None and args.task is not None)
    if args.gpt2 == "llama3":
        tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
    elif args.gpt2.startswith("gpt2"):
        tokenizer = GPT2Tokenizer.from_pretrained(args.gpt2)
    else:
        tokenizer = AutoTokenizer.from_pretrained("gpt2")
    add_newlines = True
    ...
```
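A side note on batching, as an assumption rather than a verified cause: Llama-family tokenizers ship without a pad token, and batched scoring (e.g. --test_batch_size 8) generally needs one. A common workaround is:

```python
# Llama tokenizers define no pad token by default; reuse EOS for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
```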
The command I am running is CUDA_VISIBLE_DEVICES=2 python test.py --dataset glue-mrpc --gpt2 llama3 --method channel --out_dir out/channel-metaicl --do_zeroshot --use_demonstrations --k 4 --seed 100,13,21,42,87 --test_batch_size 8. However, I got this error:
```
(metaicl2) [conda] [lh599@corfu:rethinking-demonstrations]$ CUDA_VISIBLE_DEVICES=2 python test.py --dataset glue-mrpc --gpt2 llama3 --method channel --out_dir out/channel-metaicl --do_zeroshot --use_demonstrations --k 4 --seed 100,13,21,42,87 --test_batch_size 8
07/25/2024 23:28:09 - INFO - __main__ - Namespace(checkpoint=None, dataset='glue-mrpc', do_zeroshot=True, global_step=None, gpt2='llama3', is_null=False, k=4, log_file=None, method='channel', out_dir='out/channel-metaicl', seed='100,13,21,42,87', split='test', task=None, test_batch_size=8, unseen_domain_only=False, use_calibration=False, use_demonstrations=True, use_random_english_words=False)
401 Client Error: Unauthorized for url: https://huggingface.co/meta-llama/Meta-Llama-3-8B/resolve/main/config.json
Traceback (most recent call last):
File "/research/cbim/medical/lh599/research/ruijiang/miniconda/envs/metaicl2/lib/python3.8/site-packages/transformers/configuration_utils.py", line 537, in get_config_dict
resolved_config_file = cached_path(
File "/research/cbim/medical/lh599/research/ruijiang/miniconda/envs/metaicl2/lib/python3.8/site-packages/transformers/file_utils.py", line 1400, in cached_path
output_path = get_from_cache(
File "/research/cbim/medical/lh599/research/ruijiang/miniconda/envs/metaicl2/lib/python3.8/site-packages/transformers/file_utils.py", line 1572, in get_from_cache
r.raise_for_status()
File "/research/cbim/medical/lh599/research/ruijiang/miniconda/envs/metaicl2/lib/python3.8/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/meta-llama/Meta-Llama-3-8B/resolve/main/config.json
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 276, in <module>
main(logger, args)
File "test.py", line 33, in main
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
File "/research/cbim/medical/lh599/research/ruijiang/miniconda/envs/metaicl2/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 435, in from_pretrained
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/research/cbim/medical/lh599/research/ruijiang/miniconda/envs/metaicl2/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 523, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/research/cbim/medical/lh599/research/ruijiang/miniconda/envs/metaicl2/lib/python3.8/site-packages/transformers/configuration_utils.py", line 561, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 'meta-llama/Meta-Llama-3-8B'. Make sure that:
- 'meta-llama/Meta-Llama-3-8B' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'meta-llama/Meta-Llama-3-8B' is the correct path to a directory containing a config.json file
```
Hi there, can you make sure you have signed Llama3's agreement on huggingface?
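In case it helps, gated repos also require the environment itself to be authenticated, either with huggingface-cli login or programmatically (the token below is a placeholder):

```python
from huggingface_hub import login

# Paste a read token from https://huggingface.co/settings/tokens (placeholder).
login(token="hf_xxx")
```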
Yes, I have granted access to the model
Hi, I upgraded my transformers to version 4.43.2, and it ran without error. However, the code only ran for about 5 seconds and returned an accuracy of 0.0.
Here is what my terminal looks like:
```
(metaicl2) [conda] [lh599@corfu:rethinking-demonstrations]$ CUDA_VISIBLE_DEVICES=2 python test.py --dataset glue-mrpc --gpt2 llama3 --method channel --out_dir out/channel-metaicl --do_zeroshot --use_demonstrations --k 4 --seed 100,13,21,42,87 --test_batch_size 8
07/25/2024 23:49:21 - INFO - __main__ - Namespace(checkpoint=None, dataset='glue-mrpc', do_zeroshot=True, global_step=None, gpt2='llama3', is_null=False, k=4, log_file=None, method='channel', out_dir='out/channel-metaicl', seed='100,13,21,42,87', split='test', task=None, test_batch_size=8, unseen_domain_only=False, use_calibration=False, use_demonstrations=True, use_random_english_words=False)
07/25/2024 23:49:21 - INFO - __main__ - Setting up for local_rank=-1, world_size=1
07/25/2024 23:49:21 - INFO - __main__ - batch_size=8 max_length=1024 max_length_per_example=256
07/25/2024 23:49:21 - INFO - __main__ - [Train] glue-mrpc 4
07/25/2024 23:49:21 - INFO - __main__ - [Dev] glue-mrpc 408
07/25/2024 23:49:21 - INFO - __main__ - channel on None (1 train, 1 dev)
07/25/2024 23:49:21 - INFO - __main__ - Checking the first example...
Input:
<|begin_of_text|>not_equivalent<|begin_of_text|>
sentence 1: In court papers filed Tuesday, Lee asked for an injunction against Viacom's use of the name, saying he had never given his consent for it to be used. [SEP] sentence 2: In papers filed Tuesday in Manhattan's state Supreme Court, Lee asked for an injunction against Viacom's use of the name Spike for TNN.<|begin_of_text|>
not_equivalent<|begin_of_text|>
sentence 1: Lt. Scotty Smither, a county firefighter, was struck by lightning. [SEP] sentence 2: A county firefighter, was struck by lightning and was in stable condition at Frankfort Regional Medical Center.<|begin_of_text|>
equivalent<|begin_of_text|>
sentence 1: Before the blast, Wells told police he had been forced to rob the bank and asked police to help him remove the bomb. [SEP] sentence 2: Wells, 46, said he was forced to rob the bank and asked police to help take the bomb off.<|begin_of_text|>
equivalent<|begin_of_text|>
sentence 1: However, the standards body warns that changes to Internet Explorer may affect a " large number " of existing Web sites. [SEP] sentence 2: Still, changes to IE " may affect a large number of existing Web pages, " according to the W3C's notice.<|begin_of_text|>
equivalent
Output:
<|begin_of_text|>
sentence 1: He said the foodservice pie business doesn 't fit the company's long-term growth strategy. [SEP] sentence 2: " The foodservice pie business does not fit our long-term growth strategy.
07/25/2024 23:49:21 - INFO - __main__ - out/channel-metaicl/glue-mrpc-test-channel-k=4-s=100.pkl
07/25/2024 23:49:21 - INFO - __main__ - [Train] glue-mrpc 4
07/25/2024 23:49:21 - INFO - __main__ - [Dev] glue-mrpc 408
07/25/2024 23:49:21 - INFO - __main__ - channel on None (1 train, 1 dev)
07/25/2024 23:49:22 - INFO - __main__ - Checking the first example...
Input:
<|begin_of_text|>equivalent<|begin_of_text|>
sentence 1: The court's 1992 decision reaffirmed the basic findings of Roe protecting abortion choice but lessened the standards of protection guaranteed to women by Roe. [SEP] sentence 2: In a 1992 case, the Supreme Court reaffirmed the basic findings of Roe protecting abortion choice, but lessened the standards of protection guaranteed to women by Roe.<|begin_of_text|>
equivalent<|begin_of_text|>
sentence 1: Several cities are competing for the headquarters, including Miami ; Panama City ; Atlanta ; Port-of-Spain, Trinidad ; and Puebla, Mexico. [SEP] sentence 2: But Miami is competing with eight other cities, including Atlanta ; Panama City ; Port-of-Spain, Trinidad ; and Cancn, Mexico.<|begin_of_text|>
equivalent<|begin_of_text|>
sentence 1: Feral's group was behind a successful tourism boycott about a decade ago that resulted in then-Gov. Walter J. Hickel imposing a moratorium on wolf control in 1992. [SEP] sentence 2: Friends of Animals, which touts 200,000 members, was behind a successful tourism boycott that resulted in then-Gov. Walter J. Hickel imposing a moratorium on wolf control in 1992.<|begin_of_text|>
equivalent<|begin_of_text|>
sentence 1: After all, China isn 't racing anyoneso there's no great rush, " Clark said. [SEP] sentence 2: After all, China isn ’ t racing anyone … so there ’ s no great rush, ” Clark said.<|begin_of_text|>
equivalent
Output:
<|begin_of_text|>
sentence 1: He said the foodservice pie business doesn 't fit the company's long-term growth strategy. [SEP] sentence 2: " The foodservice pie business does not fit our long-term growth strategy.
07/25/2024 23:49:22 - INFO - __main__ - out/channel-metaicl/glue-mrpc-test-channel-k=4-s=13.pkl
07/25/2024 23:49:22 - INFO - __main__ - [Train] glue-mrpc 4
07/25/2024 23:49:22 - INFO - __main__ - [Dev] glue-mrpc 408
07/25/2024 23:49:22 - INFO - __main__ - channel on None (1 train, 1 dev)
07/25/2024 23:49:22 - INFO - __main__ - Checking the first example...
Input:
<|begin_of_text|>equivalent<|begin_of_text|>
sentence 1: The national denomination of the Episcopal Church, with 2.3 million members, is the U.S. branch of the 77 million-member Anglican Communion. [SEP] sentence 2: The Episcopal Church, with 2.3 million members, is the American branch of the worldwide Anglican Communion, which has 77 million adherents.<|begin_of_text|>
equivalent<|begin_of_text|>
sentence 1: With all precincts reporting, Fletcher — a three-term congressman from Lexington — had an overwhelming 57 percent of the vote. [SEP] sentence 2: With all precincts reporting, Fletcher had 88,747 votes, or 57 percent of the total.<|begin_of_text|>
equivalent<|begin_of_text|>
sentence 1: Handset market share for the second quarter, it said, is higher than the first quarter. [SEP] sentence 2: Nokia's market share for the second quarter is estimated to be higher than the first quarter, 2003. "<|begin_of_text|>
not_equivalent<|begin_of_text|>
sentence 1: The government identified the alleged hijackers as Francisco Lamas Carón, 29, Luis Alberto Suarez Acosta, 22, and Yosvani Martínez Acosta, 27. [SEP] sentence 2: The ministry said the hijackers Francisco Lamas Caron, 29 ; Luis Alberto Suarez Acosta, 22 ; and Yosvani Martinez Acosta, 27, shot themselves for unknown reasons.<|begin_of_text|>
equivalent
Output:
<|begin_of_text|>
sentence 1: He said the foodservice pie business doesn 't fit the company's long-term growth strategy. [SEP] sentence 2: " The foodservice pie business does not fit our long-term growth strategy.
07/25/2024 23:49:22 - INFO - __main__ - out/channel-metaicl/glue-mrpc-test-channel-k=4-s=21.pkl
07/25/2024 23:49:22 - INFO - __main__ - [Train] glue-mrpc 4
07/25/2024 23:49:22 - INFO - __main__ - [Dev] glue-mrpc 408
07/25/2024 23:49:22 - INFO - __main__ - channel on None (1 train, 1 dev)
07/25/2024 23:49:22 - INFO - __main__ - Checking the first example...
Input:
<|begin_of_text|>equivalent<|begin_of_text|>
sentence 1: Tibco has used the Rendezvous name since 1994 for several of its technology products, according to the Palo Alto, California company. [SEP] sentence 2: Tibco has used the Rendezvous name since 1994 for several of its technology products, it said.<|begin_of_text|>
not_equivalent<|begin_of_text|>
sentence 1: Most of the alleged spammers engaged in fraudulent or deceptive practices, said Brad Smith, Microsoft's senior VP and general counsel. [SEP] sentence 2: " Spam knows no borders, " said Brad Smith, Microsoft's senior vice-president and general counsel.<|begin_of_text|>
equivalent<|begin_of_text|>
sentence 1: Yesterday, Taiwan reported 35 new infections, bringing the total number of cases to 418. [SEP] sentence 2: The island reported another 35 probable cases yesterday, taking its total to 418.<|begin_of_text|>
equivalent<|begin_of_text|>
sentence 1: A month ago, the Commerce Department estimated that GDP had grown at a 7.2 percent rate in the third quarter. [SEP] sentence 2: A month ago, the Commerce Department said GDP grew at a 7.2 percent rate.<|begin_of_text|>
equivalent
Output:
<|begin_of_text|>
sentence 1: He said the foodservice pie business doesn 't fit the company's long-term growth strategy. [SEP] sentence 2: " The foodservice pie business does not fit our long-term growth strategy.
07/25/2024 23:49:22 - INFO - __main__ - out/channel-metaicl/glue-mrpc-test-channel-k=4-s=42.pkl
07/25/2024 23:49:22 - INFO - __main__ - [Train] glue-mrpc 4
07/25/2024 23:49:22 - INFO - __main__ - [Dev] glue-mrpc 408
07/25/2024 23:49:22 - INFO - __main__ - channel on None (1 train, 1 dev)
07/25/2024 23:49:22 - INFO - __main__ - Checking the first example...
Input:
<|begin_of_text|>equivalent<|begin_of_text|>
sentence 1: His dissent was joined by Chief Justice William H. Rehnquist and Justice Clarence Thomas. [SEP] sentence 2: Chief Justice William H. Rehnquist and Justices Antonin Scalia and Clarence Thomas dissented.<|begin_of_text|>
equivalent<|begin_of_text|>
sentence 1: The 20 master computers are located in the United States, Canada and Korea, Mr. Kuo said. [SEP] sentence 2: The computers were located in the United States, Canada and South Korea, he said.<|begin_of_text|>
equivalent<|begin_of_text|>
sentence 1: He said there were complex reasons for the increased numbers of cases and scientists were only just beginning to understand the risk factors. [SEP] sentence 2: However, he said, The reasons behind the increase in incidence are more complex and were just beginning to understand the risk factors.<|begin_of_text|>
equivalent<|begin_of_text|>
sentence 1: Meanwhile, the global death toll approached 770 with more than 8,300 people sickened since the severe acute respiratory syndrome virus first appeared in southern China in November. [SEP] sentence 2: The global death toll from SARS was at least 767, with more than 8,300 people sickened since the virus first appeared in southern China in November.<|begin_of_text|>
equivalent
Output:
<|begin_of_text|>
sentence 1: He said the foodservice pie business doesn 't fit the company's long-term growth strategy. [SEP] sentence 2: " The foodservice pie business does not fit our long-term growth strategy.
07/25/2024 23:49:22 - INFO - __main__ - out/channel-metaicl/glue-mrpc-test-channel-k=4-s=87.pkl
07/25/2024 23:49:22 - INFO - __main__ - Macro-F1 of None over 1 target tasks: 0.0
```
Again, we are currently not planning to incorporate or support more models in this codebase. A quick suggestion from looking at your output: the BOS token appearing everywhere looks suspicious. Taking a look at the exact output of the model might also help.
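The likely mechanism, inferred from the pasted prompt rather than verified against the data code, is that each prompt piece is tokenized separately and the ids are concatenated, while Llama-3's tokenizer prepends <|begin_of_text|> on every call by default. Disabling special tokens per piece and adding a single BOS up front would avoid this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
pieces = ["not_equivalent", "sentence 1: ... [SEP] sentence 2: ..."]  # demo fragments

# Encode each piece without the automatic special tokens...
ids = [tokenizer(p, add_special_tokens=False)["input_ids"] for p in pieces]

# ...then add one BOS at the front of the concatenated prompt.
input_ids = [tokenizer.bos_token_id] + [t for seq in ids for t in seq]
```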