nmichlo / disent

🧶 Modular VAE disentanglement framework for python built with PyTorch Lightning ▸ Including metrics and datasets ▸ With strongly supervised, weakly supervised and unsupervised methods ▸ Easily configured and run with Hydra config ▸ Inspired by disentanglement_lib
https://disent.michlo.dev
MIT License

[BUG]: Slow sampling from Dataloaders with `in_memory=False` #49

Closed gorkamunoz closed 2 weeks ago

gorkamunoz commented 2 weeks ago

Hi! First of all, thanks for the awesome package! I have encountered a problem while loading datasets through DisentDataset. For example, I am running:

```python
data = DSpritesData()
dataset = DisentDataset(dataset=data, sampler=SingleSampler(), transform=ToImgTensorF32())
dataloader = DataLoader(dataset=dataset, batch_size=128, shuffle=True, num_workers=1)
```

Then, getting batches from this dataloader is very slow (e.g. getting one batch using `next(iter(dataloader))` takes more than 10 seconds). I have played around with the input parameters but couldn't make it any faster.
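
Roughly how I am timing this, for reference (just a quick sketch using the standard library, nothing disent-specific):

```python
import time

t0 = time.perf_counter()
batch = next(iter(dataloader))  # fetch a single batch from a fresh iterator
print(f"one batch took {time.perf_counter() - t0:.1f}s")
```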

For instance, I wrote a custom class that creates a dataloader with similar properties (i.e. one that returns dictionaries with the `x_targ` key) as follows:

```python
import torch
from torch.utils.data import DataLoader, Dataset


class CustomDataset(Dataset):
    def __init__(self, tensor):
        self.tensor = tensor

    def __len__(self):
        return self.tensor.size(0)

    def __getitem__(self, idx):
        return self.tensor[idx]


def custom_collate_fn(batch):
    return {'x_targ': [torch.stack(batch)]}


dataset = CustomDataset(dsprites_data_raw)  # dsprites_data_raw is a torch.Tensor with the full dataset
dataloader_custom = DataLoader(dataset, batch_size=batch_size, collate_fn=custom_collate_fn, shuffle=True)
```

and this one works just fine.

Any clue what could be slowing down the dataloaders in DisentDataset? Thanks in advance!

nmichlo commented 2 weeks ago

Hi there,

Thanks for reporting this bug! Very strange that it is taking so long.

Can I ask what versions of Python and which packages you are using (pip freeze)? Maybe something has broken?

gorkamunoz commented 2 weeks ago

Hi, thanks for the quick answer. I am using a conda environment with python=3.11, and all libraries were installed when installing disent from pip yesterday. The disent version is 0.8.0 and torch is 2.4.0.

This slow behavior seems to be happening only for DSpritesData and Shapes3dData. I have tried XYObjectData, SmallNorbData and Cars3dData, and for all three getting one batch takes ~0.5 secs. For the other two it takes around 20 seconds. As mentioned, with a typical use of DataLoader with both of these datasets I recover normal sampling times, which is why I think something is happening in DSpritesData, Shapes3dData or DisentDataset.

The code I use to get the loader for all these Data classes is:

```python
data = ...  # one of the data classes above
dataset = DisentDataset(dataset=data, sampler=SingleSampler(), transform=ToImgTensorF32())
dataloader = DataLoader(dataset=dataset, batch_size=128, shuffle=True, num_workers=1)
```

nmichlo commented 2 weeks ago

I replicated your env.

There seem to be issues with parallel processing interactions with the data loader.

If you set num_workers=0, then the issue is resolved, but data is only loaded in the main process (a single worker).
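
For example (the same setup as above, with only `num_workers` changed):

```python
# load in the main process: no background worker processes to start up or tear down
dataloader = DataLoader(dataset=dataset, batch_size=128, shuffle=True, num_workers=0)
```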

If you keep num_workers>0, then it also seems to be much better if you do:

```python
dataloader_itr = iter(dataloader)
for item in dataloader_itr:
    pass
```

Rather than the following, which is very strange.

```python
for item in dataloader:
    pass
# OR
for item in iter(dataloader):
    pass
```
CODE

```python
import logging

from torch.utils.data import DataLoader

from disent.dataset import DisentDataset
from disent.dataset.data import DSpritesData
from disent.dataset.sampling import SingleSampler
from disent.dataset.transform import ToImgTensorF32
from disent.util.profiling import Timer


if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)

    data = DSpritesData(prepare=True, in_memory=False)
    dataset = DisentDataset(
        dataset=data,
        sampler=SingleSampler(),
        transform=ToImgTensorF32(),
    )

    # 0 workers for no additional processes
    # 1 worker or more starts in background process(es) so init and teardown overhead
    dataloader = DataLoader(
        dataset=dataset, batch_size=128, shuffle=True, num_workers=1
    )

    p = lambda: print(t.name, t) or t.restart()

    # 7s
    with Timer(f"[] for iter(dataloader) x10") as t:
        with Timer(f"* for iter(dataloader) x10") as t:
            for i, item in enumerate(iter(dataloader)):
                p()
                if i >= 10:
                    break

    # 2s
    with Timer(f"[] next(dataloader_itr) x10") as t:
        with Timer(f"* next(dataloader_itr) x10") as t:
            dataloader_itr = iter(dataloader)
            for i in range(10):
                next(dataloader_itr)
                p()
                if i >= 10:
                    break

    # 7s
    with Timer(f"[] for dataloader x10") as t:
        with Timer(f"* for dataloader x10") as t:
            for i, item in enumerate(dataloader):
                p()
                if i >= 10:
                    break

    # ~70s
    with Timer(f"[] next(iter(dataloader)) x10") as t:
        with Timer(f"* next(iter(dataloader)) x10") as t:
            for i in range(10):
                next(iter(dataloader))
                p()
                if i >= 10:
                    break
```
gorkamunoz commented 2 weeks ago

Yes, I think there is something weird going on with the parallelization, as having more workers actually leads to slower runs (even on the XYObjectData class...).

I have also found a way to solve the problem by setting the `in_memory` parameter in the data classes to True. Is this to be expected?
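
For reference, this is the only change needed (same setup as before, just `in_memory=True`):

```python
# keep the full dataset in RAM instead of reading each datapoint from the hdf5 file on disk
data = DSpritesData(in_memory=True)
dataset = DisentDataset(dataset=data, sampler=SingleSampler(), transform=ToImgTensorF32())
dataloader = DataLoader(dataset=dataset, batch_size=128, shuffle=True, num_workers=1)
```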

nmichlo commented 2 weeks ago

It is unexpected that `in_memory` resolved this.

It's possible the h5py internals have changed, or there is some other issue. I will try to dive into this a bit.

nmichlo commented 2 weeks ago

Are you running into saturation issues feeding your model with a single thread?

EDIT: workers=0

gorkamunoz commented 2 weeks ago

No, I was able to train in all cases, both with workers = 1 and workers > 1.

nmichlo commented 2 weeks ago

`DSpritesData(in_memory=False)`

I am not noticing any performance issues after the data loaders are initialised. The initial delay is expected when using multiple worker processes, since they have considerable setup and teardown time.

RESULTS

**3.11, workers=0**

```
# * for iter(dataloader) x10 101.835ms
# * for iter(dataloader) x10 36.533ms
# * for iter(dataloader) x10 29.710ms
# * for iter(dataloader) x10 31.741ms
# * for iter(dataloader) x10 26.128ms
# * for iter(dataloader) x10 23.105ms
# * for iter(dataloader) x10 24.163ms
# * for iter(dataloader) x10 22.612ms
# * for iter(dataloader) x10 21.664ms
# * for iter(dataloader) x10 23.103ms
# * for iter(dataloader) x10 21.409ms
# INFO:disent.util.profiling:* for iter(dataloader) x10: 4.086ms
# INFO:disent.util.profiling:[] for iter(dataloader) x10: 366.401ms
# * next(dataloader_itr) x10 38.279ms
# * next(dataloader_itr) x10 27.176ms
# * next(dataloader_itr) x10 21.213ms
# * next(dataloader_itr) x10 21.106ms
# * next(dataloader_itr) x10 20.284ms
# * next(dataloader_itr) x10 18.863ms
# * next(dataloader_itr) x10 17.833ms
# * next(dataloader_itr) x10 17.974ms
# * next(dataloader_itr) x10 17.970ms
# INFO:disent.util.profiling:* next(dataloader_itr) x10: 1.000µs
# INFO:disent.util.profiling:[] next(dataloader_itr) x10: 217.873ms
# * next(dataloader_itr) x10 16.980ms
# * for dataloader x10 28.529ms
# * for dataloader x10 17.639ms
# * for dataloader x10 16.693ms
# * for dataloader x10 17.243ms
# * for dataloader x10 15.508ms
# * for dataloader x10 15.167ms
# * for dataloader x10 16.272ms
# * for dataloader x10 16.184ms
# * for dataloader x10 15.608ms
# * for dataloader x10 13.681ms
# * for dataloader x10 14.040ms
# INFO:disent.util.profiling:* for dataloader x10: 3.332ms
# INFO:disent.util.profiling:[] for dataloader x10: 190.057ms
# * next(iter(dataloader)) x10 28.243ms
# * next(iter(dataloader)) x10 26.142ms
# * next(iter(dataloader)) x10 28.807ms
# * next(iter(dataloader)) x10 29.123ms
# * next(iter(dataloader)) x10 27.872ms
# * next(iter(dataloader)) x10 28.504ms
# * next(iter(dataloader)) x10 27.637ms
# * next(iter(dataloader)) x10 26.384ms
# * next(iter(dataloader)) x10 26.132ms
# * next(iter(dataloader)) x10 25.694ms
# INFO:disent.util.profiling:* next(iter(dataloader)) x10: 1.000µs
# INFO:disent.util.profiling:[] next(iter(dataloader)) x10: 274.704ms
```

**3.11, workers=4**

```
# * for iter(dataloader) x10 2.090s
# * for iter(dataloader) x10 349.000µs
# * for iter(dataloader) x10 88.000µs
# * for iter(dataloader) x10 124.000µs
# * for iter(dataloader) x10 23.745ms
# * for iter(dataloader) x10 119.000µs
# * for iter(dataloader) x10 104.000µs
# * for iter(dataloader) x10 110.000µs
# * for iter(dataloader) x10 39.694ms
# * for iter(dataloader) x10 1.299ms
# * for iter(dataloader) x10 94.000µs
# INFO:disent.util.profiling:* for iter(dataloader) x10: 20.019s
# INFO:disent.util.profiling:[] for iter(dataloader) x10: 22.175s
# * next(dataloader_itr) x10 2.356s
# * next(dataloader_itr) x10 308.000µs
# * next(dataloader_itr) x10 220.000µs
# * next(dataloader_itr) x10 134.000µs
# * next(dataloader_itr) x10 28.275ms
# * next(dataloader_itr) x10 1.630ms
# * next(dataloader_itr) x10 60.000µs
# * next(dataloader_itr) x10 157.000µs
# * next(dataloader_itr) x10 18.618ms
# * next(dataloader_itr) x10 79.000µs
# INFO:disent.util.profiling:* next(dataloader_itr) x10: 12.000µs
# INFO:disent.util.profiling:[] next(dataloader_itr) x10: 2.405s
# * for dataloader x10 2.078s
# * for dataloader x10 243.000µs
# * for dataloader x10 172.000µs
# * for dataloader x10 98.000µs
# * for dataloader x10 18.942ms
# * for dataloader x10 160.000µs
# * for dataloader x10 496.000µs
# * for dataloader x10 68.000µs
# * for dataloader x10 14.423ms
# * for dataloader x10 121.000µs
# * for dataloader x10 2.020ms
# INFO:disent.util.profiling:* for dataloader x10: 20.017s
# INFO:disent.util.profiling:[] for dataloader x10: 22.132s
# * next(iter(dataloader)) x10 17.662s
# * next(iter(dataloader)) x10 17.494s
# * next(iter(dataloader)) x10 12.479s
# * next(iter(dataloader)) x10 22.320s
# * next(iter(dataloader)) x10 12.234s
# * next(iter(dataloader)) x10 6.907s
# * next(iter(dataloader)) x10 11.896s
# * next(iter(dataloader)) x10 11.911s
# * next(iter(dataloader)) x10 16.891s
# * next(iter(dataloader)) x10 11.992s
# INFO:disent.util.profiling:* next(iter(dataloader)) x10: 9.000µs
# INFO:disent.util.profiling:[] next(iter(dataloader)) x10: 2m:21s
```
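
As an aside (not benchmarked in this thread): if the repeated `next(iter(dataloader))` pattern is hard to avoid, PyTorch's `persistent_workers` option keeps the worker processes alive across iterators, which should amortise the setup/teardown cost shown above.

```python
# sketch only, not measured here: reuse worker processes instead of respawning them per iterator
dataloader = DataLoader(
    dataset=dataset,
    batch_size=128,
    shuffle=True,
    num_workers=4,
    persistent_workers=True,  # only valid when num_workers > 0
)
```
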
nmichlo commented 2 weeks ago

`DSpritesData(in_memory=False)`

Here are benchmarks with tqdm. Initial setup time is still high, but after it gets going it seems fine.

CODE

```python
import itertools
import logging

from torch.utils.data import DataLoader
from tqdm import tqdm

from disent.dataset import DisentDataset
from disent.dataset.data import DSpritesData
from disent.dataset.sampling import SingleSampler
from disent.dataset.transform import ToImgTensorF32
from disent.util.profiling import Timer


if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)

    data = DSpritesData(prepare=True, in_memory=False)
    dataset = DisentDataset(
        dataset=data,
        sampler=SingleSampler(),
        transform=ToImgTensorF32(),
    )

    for workers, batch_size in itertools.product([0, 1, 4], [1, 16, 128]):
        dataloader = DataLoader(
            dataset=dataset, batch_size=128, shuffle=True, num_workers=workers
        )

        tqdm.write("")
        tqdm.write(f'===== [workers={workers}, batch_size={batch_size}] =====')
        tqdm.write("")

        t = Timer()
        t.__enter__()
        with tqdm(desc=f"[workers={workers}, batch_size={batch_size}] dataloader", position=0) as p:
            for i, item in enumerate(dataloader):
                if i % 100 == 0:
                    tqdm.write(f'[{i}] {t}')
                    t.restart()
                p.update(batch_size)
                if i >= 1000:
                    p.close()
                    break
        tqdm.write(f'[end] {t}')
        tqdm.write("")
```
RESULTS

```
[workers=0, batch_size=1] dataloader: 0it [00:00, ?it/s]

===== [workers=0, batch_size=1] =====

[workers=0, batch_size=1] dataloader: 0it [00:00, ?it/s][0] 27.269ms
[workers=0, batch_size=1] dataloader: 100it [00:00, 186.70it/s][100] 5.248ms
[workers=0, batch_size=1] dataloader: 200it [00:01, 195.84it/s][200] 5.196ms
[workers=0, batch_size=1] dataloader: 300it [00:01, 150.67it/s][300] 6.160ms
[workers=0, batch_size=1] dataloader: 400it [00:02, 185.96it/s][400] 5.347ms
[workers=0, batch_size=1] dataloader: 500it [00:02, 196.02it/s][500] 4.906ms
[workers=0, batch_size=1] dataloader: 600it [00:03, 204.90it/s][600] 4.872ms
[workers=0, batch_size=1] dataloader: 704it [00:03, 203.38it/s][700] 4.737ms
[workers=0, batch_size=1] dataloader: 800it [00:04, 207.84it/s][800] 4.796ms
[workers=0, batch_size=1] dataloader: 900it [00:04, 207.52it/s][900] 4.740ms
[workers=0, batch_size=1] dataloader: 1001it [00:05, 194.93it/s]
[workers=0, batch_size=16] dataloader: 0it [00:00, ?it/s][1000] 4.686ms
[end] 5.893ms

===== [workers=0, batch_size=16] =====

[workers=0, batch_size=16] dataloader: 0it [00:00, ?it/s][0] 17.745ms
[workers=0, batch_size=16] dataloader: 1696it [00:00, 3358.04it/s][100] 4.763ms
[workers=0, batch_size=16] dataloader: 3200it [00:00, 3364.62it/s][200] 4.636ms
[workers=0, batch_size=16] dataloader: 4864it [00:01, 3399.29it/s][300] 4.623ms
[workers=0, batch_size=16] dataloader: 6400it [00:01, 3392.88it/s][400] 4.657ms
[workers=0, batch_size=16] dataloader: 8032it [00:02, 3405.28it/s][500] 4.657ms
[workers=0, batch_size=16] dataloader: 9600it [00:02, 3416.79it/s][600] 4.906ms
[workers=0, batch_size=16] dataloader: 11200it [00:03, 3414.92it/s][700] 4.909ms
[workers=0, batch_size=16] dataloader: 12800it [00:03, 3410.63it/s][800] 4.638ms
[workers=0, batch_size=16] dataloader: 14400it [00:04, 3403.78it/s][900] 5.189ms
[workers=0, batch_size=16] dataloader: 16016it [00:04, 3374.00it/s]
[1000] 4.812ms
[end] 7.035ms

===== [workers=0, batch_size=128] =====

[workers=0, batch_size=128] dataloader: 0it [00:00, ?it/s][0] 19.148ms
[workers=0, batch_size=128] dataloader: 13440it [00:00, 27115.46it/s][100] 4.615ms
[workers=0, batch_size=128] dataloader: 25600it [00:00, 27236.20it/s][200] 4.738ms
[workers=0, batch_size=128] dataloader: 38784it [00:01, 27118.93it/s][300] 4.598ms
[workers=0, batch_size=128] dataloader: 51200it [00:01, 26938.31it/s][400] 4.816ms
[workers=0, batch_size=128] dataloader: 64000it [00:02, 26699.09it/s][500] 4.883ms
[workers=0, batch_size=128] dataloader: 76800it [00:02, 26463.94it/s][600] 4.834ms
[workers=0, batch_size=128] dataloader: 89600it [00:03, 26950.24it/s][700] 4.626ms
[workers=0, batch_size=128] dataloader: 102400it [00:03, 26372.21it/s][800] 4.642ms
[workers=0, batch_size=128] dataloader: 115200it [00:04, 25888.80it/s][900] 4.665ms
[workers=0, batch_size=128] dataloader: 128128it [00:04, 26664.43it/s]
[workers=1, batch_size=1] dataloader: 0it [00:00, ?it/s][1000] 4.655ms
[end] 5.784ms

===== [workers=1, batch_size=1] =====

[workers=1, batch_size=1] dataloader: 1it [00:02, 2.42s/it][0] 2.422s
[workers=1, batch_size=1] dataloader: 100it [00:02, 75.10it/s][100] 5.576ms
[workers=1, batch_size=1] dataloader: 200it [00:03, 152.68it/s][200] 5.558ms
[workers=1, batch_size=1] dataloader: 302it [00:04, 178.88it/s][300] 5.432ms
[workers=1, batch_size=1] dataloader: 400it [00:04, 186.66it/s][400] 5.466ms
[workers=1, batch_size=1] dataloader: 500it [00:05, 187.70it/s][500] 5.344ms
[workers=1, batch_size=1] dataloader: 600it [00:05, 191.34it/s][600] 5.370ms
[workers=1, batch_size=1] dataloader: 700it [00:06, 191.02it/s][700] 5.165ms
[workers=1, batch_size=1] dataloader: 800it [00:06, 192.62it/s][800] 5.170ms
[workers=1, batch_size=1] dataloader: 900it [00:07, 196.36it/s][900] 4.950ms
[workers=1, batch_size=1] dataloader: 1001it [00:07, 129.61it/s]
[1000] 4.967ms
[workers=1, batch_size=16] dataloader: 0it [00:00, ?it/s][end] 5.011s

===== [workers=1, batch_size=16] =====

[workers=1, batch_size=16] dataloader: 16it [00:02, 7.87it/s][0] 2.032s
[workers=1, batch_size=16] dataloader: 1600it [00:02, 1361.30it/s][100] 5.644ms
[workers=1, batch_size=16] dataloader: 3200it [00:03, 2507.07it/s][200] 5.590ms
[workers=1, batch_size=16] dataloader: 4848it [00:03, 2902.34it/s][300] 5.419ms
[workers=1, batch_size=16] dataloader: 6400it [00:04, 2917.14it/s][400] 5.224ms
[workers=1, batch_size=16] dataloader: 8000it [00:04, 3041.68it/s][500] 5.397ms
[workers=1, batch_size=16] dataloader: 9600it [00:05, 3057.18it/s][600] 5.520ms
[workers=1, batch_size=16] dataloader: 11200it [00:05, 3063.02it/s][700] 5.324ms
[workers=1, batch_size=16] dataloader: 12800it [00:06, 3121.84it/s][800] 5.160ms
[workers=1, batch_size=16] dataloader: 14400it [00:06, 3113.14it/s][900] 5.090ms
[workers=1, batch_size=16] dataloader: 16016it [00:07, 2180.93it/s]
[1000] 5.105ms
[workers=1, batch_size=128] dataloader: 0it [00:00, ?it/s][end] 5.015s

===== [workers=1, batch_size=128] =====

[workers=1, batch_size=128] dataloader: 128it [00:01, 65.55it/s][0] 1.953s
[workers=1, batch_size=128] dataloader: 12800it [00:02, 10733.59it/s][100] 5.515ms
[workers=1, batch_size=128] dataloader: 26112it [00:03, 21328.37it/s][200] 5.501ms
[workers=1, batch_size=128] dataloader: 38400it [00:03, 23140.25it/s][300] 5.320ms
[workers=1, batch_size=128] dataloader: 51200it [00:04, 23534.43it/s][400] 5.660ms
[workers=1, batch_size=128] dataloader: 64000it [00:04, 23827.54it/s][500] 5.432ms
[workers=1, batch_size=128] dataloader: 76800it [00:05, 24584.95it/s][600] 5.143ms
[workers=1, batch_size=128] dataloader: 89600it [00:05, 22861.10it/s][700] 5.382ms
[workers=1, batch_size=128] dataloader: 102400it [00:06, 24676.00it/s][800] 5.135ms
[workers=1, batch_size=128] dataloader: 115200it [00:06, 25140.23it/s][900] 5.104ms
[workers=1, batch_size=128] dataloader: 128128it [00:07, 17602.04it/s]
[1000] 5.143ms
[end] 5.013s

===== [workers=4, batch_size=1] =====

[workers=4, batch_size=1] dataloader: 1it [00:02, 2.43s/it][0] 2.434s
[workers=4, batch_size=1] dataloader: 100it [00:02, 35.29it/s][100] 2.801ms
[workers=4, batch_size=1] dataloader: 208it [00:02, 144.84it/s][200] 120.000µs
[workers=4, batch_size=1] dataloader: 300it [00:02, 217.62it/s][300] 95.000µs
[workers=4, batch_size=1] dataloader: 400it [00:03, 294.72it/s][400] 61.000µs
[workers=4, batch_size=1] dataloader: 508it [00:03, 442.97it/s][500] 79.000µs
[workers=4, batch_size=1] dataloader: 600it [00:03, 512.09it/s][600] 102.000µs
[workers=4, batch_size=1] dataloader: 700it [00:03, 570.28it/s][700] 70.000µs
[workers=4, batch_size=1] dataloader: 812it [00:03, 649.28it/s][800] 59.000µs
[workers=4, batch_size=1] dataloader: 900it [00:03, 677.63it/s][900] 81.000µs
[workers=4, batch_size=1] dataloader: 1001it [00:03, 262.80it/s]
[1000] 72.000µs
[end] 20.025s

===== [workers=4, batch_size=16] =====

[workers=4, batch_size=16] dataloader: 16it [00:01, 8.59it/s][0] 1.863s
[workers=4, batch_size=16] dataloader: 1600it [00:02, 551.97it/s][100] 4.743ms
[workers=4, batch_size=16] dataloader: 3200it [00:02, 2578.36it/s][200] 2.101ms
[workers=4, batch_size=16] dataloader: 4800it [00:02, 3931.26it/s][300] 1.359ms
[workers=4, batch_size=16] dataloader: 6400it [00:02, 5297.89it/s][400] 605.000µs
[workers=4, batch_size=16] dataloader: 8000it [00:02, 7436.02it/s][500] 94.000µs
[workers=4, batch_size=16] dataloader: 9600it [00:02, 8526.77it/s][600] 115.000µs
[workers=4, batch_size=16] dataloader: 11360it [00:02, 10119.28it/s][700] 397.000µs
[workers=4, batch_size=16] dataloader: 12800it [00:03, 10631.22it/s][800] 1.029ms
[workers=4, batch_size=16] dataloader: 14400it [00:03, 10478.31it/s][900] 199.000µs
[workers=4, batch_size=16] dataloader: 16016it [00:03, 4832.11it/s]
[1000] 252.000µs
[end] 20.018s

===== [workers=4, batch_size=128] =====

[workers=4, batch_size=128] dataloader: 128it [00:01, 64.86it/s][0] 1.974s
[workers=4, batch_size=128] dataloader: 12800it [00:02, 4896.60it/s][100] 4.382ms
[workers=4, batch_size=128] dataloader: 25856it [00:02, 21343.33it/s][200] 2.555ms
[workers=4, batch_size=128] dataloader: 38400it [00:02, 31656.31it/s][300] 1.361ms
[workers=4, batch_size=128] dataloader: 52096it [00:02, 46869.96it/s][400] 62.000µs
[workers=4, batch_size=128] dataloader: 64000it [00:02, 55053.69it/s][500] 77.000µs
[workers=4, batch_size=128] dataloader: 76800it [00:02, 64236.30it/s][600] 102.000µs
[workers=4, batch_size=128] dataloader: 89984it [00:03, 78185.19it/s][700] 76.000µs
[workers=4, batch_size=128] dataloader: 102400it [00:03, 82657.10it/s][800] 60.000µs
[workers=4, batch_size=128] dataloader: 115200it [00:03, 82587.89it/s][900] 72.000µs
[workers=4, batch_size=128] dataloader: 128128it [00:03, 37481.42it/s]
[1000] 61.000µs
[end] 20.021s
```
nmichlo commented 2 weeks ago

> No, I was able to train in all cases, both with workers = 1 and workers > 1

I'm glad it's working for you! Would be very curious to hear about your use case.

Going to close this as resolved if that's alright.

gorkamunoz commented 2 weeks ago

Ok, thank you very much for the quick answers! I have reproduced your tests, and I now think that my problems may come from the datasets being stored on a secondary drive. I am still trying to understand why this makes increasing num_workers unhelpful, but having the dataset on the same drive (as downloaded by your code) makes everything run smoothly.
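
A rough way to check this is to compare random-read latency of the same converted hdf5 file on the two drives, e.g. something like the sketch below (the paths are hypothetical, and it simply reads from whatever the first dataset in the file is):

```python
import time
import h5py
import numpy as np

# hypothetical copies of the same converted dsprites hdf5 file on the two drives
PATHS = ["/mnt/secondary/dsprites.h5", "/home/user/.cache/dsprites.h5"]

for path in PATHS:
    with h5py.File(path, "r") as f:
        dset = f[list(f.keys())[0]]            # first dataset stored in the file
        idxs = np.random.randint(0, len(dset), size=1000)
        t0 = time.perf_counter()
        for i in idxs:
            _ = dset[int(i)]                   # one random datapoint per read, like in_memory=False
        print(f"{path}: {time.perf_counter() - t0:.3f}s for 1000 random reads")
```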

nmichlo commented 2 weeks ago

Ahh, yes, this is a fairly large constraint of the original design of the dataset loading.

When I originally built this project I had to run on fairly resource-constrained systems (low memory, small GPUs); however, those machines had SSDs with relatively fast disk access. This is why I chose to use the hdf5 backend for the datasets and convert them to that format, as I wasn't really able to store everything in memory.

The current implementation, when using hdf5, reads from disk on every single datapoint access, so a network drive or HDD probably has too high a latency for `in_memory=False`. For decent performance here I would encourage you to use `in_memory=True` if you can, to get around the high latency of the network drive.

nmichlo commented 2 weeks ago

The way I got around this when running experiments on the cluster I had access to was to store the common data on the network drive, and then initialise or copy everything into /tmp (on an SSD, not the network drive or an HDD) when the job first starts. This gets around network congestion, latency issues, and the memory limitations on the nodes.
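
A rough sketch of that pattern (the paths are hypothetical, and it assumes the data classes accept a `data_root` argument pointing at where the prepared files live, rather than this being a guaranteed part of the API):

```python
import shutil
from pathlib import Path

from disent.dataset.data import DSpritesData

# hypothetical locations: prepared data on the shared network drive, fast local scratch on the node
NETWORK_ROOT = Path("/shared/disent-data")
LOCAL_ROOT = Path("/tmp/disent-data")

# copy once when the job starts so that all subsequent hdf5 reads hit the local SSD
if not LOCAL_ROOT.exists():
    shutil.copytree(NETWORK_ROOT, LOCAL_ROOT)

# assumption: data_root can be used to point the dataset at the copied files
data = DSpritesData(data_root=str(LOCAL_ROOT), prepare=True, in_memory=False)
```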