Closed: zeeshankhan2 closed this issue 1 year ago
The line causing the error is using the pattern "join an iterable using a string in between each iterable element." So,
"🏄".join(["Let's", "go", "surfing"])
Produces:
"Let's🏄go🏄surfing"
In your case, this code expects the dataset attribute `categories` to be iterable. You can use the `%debug` magic to get more information on the specific data, but you can also check directly with something like:
```python
from collections.abc import Iterable

for ds in data:
    if 'categories' in ds and not isinstance(ds['categories'], Iterable):
        print("Invalid dataset categories: {}".format(ds['categories']))
        break
```
You will then have the invalid dataset bound to the variable `ds`, which you can inspect at your leisure. There could be more than one; re-run the code until no more invalid datasets are found.
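Once the offending datasets are identified, one possible repair is to normalize `categories` before writing. This is only a sketch under the assumption that mapping a `None` value to an empty tuple, and wrapping a bare string in a one-element tuple, is acceptable for your data; the sample `data` list below is hypothetical:

```python
# Hypothetical cleanup pass over a wurst-style dataset list `data`:
# map None `categories` to an empty tuple and wrap a bare string in a
# one-element tuple so that ", ".join(...) succeeds for every dataset.
data = [
    {"name": "a", "categories": None},          # non-iterable -> ()
    {"name": "b", "categories": "transport"},   # bare string -> ("transport",)
    {"name": "c", "categories": ("air", "low population density")},
]

for ds in data:
    cats = ds.get("categories")
    if cats is None:
        ds["categories"] = ()
    elif isinstance(cats, str):
        ds["categories"] = (cats,)

# every dataset's categories can now be joined safely
for ds in data:
    ", ".join(ds["categories"])
```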
In general, this kind of question should be asked on Stack Overflow, it isn't a software bug.
Thanks @cmutel for the input. The issue is solved, and I hope this will be helpful for others as well. Unfortunately, I did not grasp the problem at first, so I thought it might be due to a new version or an update. I will post on Stack Overflow next time, as suggested. Thanks a lot.
Hi @michaelweinold, I modified a database using wurst, and when I write the database using `bw.write_brightway2_database` I get the following error:
```
TypeError                                 Traceback (most recent call last)
d:\Python\Wurst\change_global_markets_location_to_rer.ipynb Cell 18 line 1
----> 1 w.write_brightway2_database(data, "cutoff_3.9.1_updated_24-10_test_2")

File c:\Users\M. Zeeshan\anaconda3\envs\bw\Lib\site-packages\wurst\brightway\write_database.py:53, in write_brightway2_database(data, name)
     51 check_internal_linking(data)
     52 check_duplicate_codes(data)
---> 53 WurstImporter(name, data).write_database()

File c:\Users\M. Zeeshan\anaconda3\envs\bw\Lib\site-packages\wurst\brightway\write_database.py:21, in WurstImporter.write_database(self)
     19 assert not self.statistics()[2], "Not all exchanges are linked"
     20 assert self.db_name not in databases, "This database already exists"
---> 21 super().write_database()

File c:\Users\M. Zeeshan\anaconda3\envs\bw\Lib\site-packages\bw2io\importers\base_lci.py:269, in LCIImporter.write_database(self, data, delete_existing, backend, activate_parameters, **kwargs)
    266 self.write_database_parameters(activate_parameters, delete_existing)
    268 existing.update(data)
--> 269 db.write(existing)
    271 if activate_parameters:
    272     self._write_activity_parameters(activity_parameters)

File c:\Users\M. Zeeshan\anaconda3\envs\bw\Lib\site-packages\bw2data\project.py:358, in writable_project(wrapped, instance, args, kwargs)
    356 if projects.read_only:
    357     raise ReadOnlyProject(READ_ONLY_PROJECT)
--> 358 return wrapped(*args, **kwargs)

File c:\Users\M. Zeeshan\anaconda3\envs\bw\Lib\site-packages\bw2data\backends\peewee\database.py:266, in SQLiteBackend.write(self, data, process)
    263     self.delete(warn=False)
    264     raise
--> 266 self.make_searchable(reset=True)
    268 if process:
    269     self.process()

File c:\Users\M. Zeeshan\anaconda3\envs\bw\Lib\site-packages\bw2data\project.py:358, in writable_project(wrapped, instance, args, kwargs)
    356 if projects.read_only:
    357     raise ReadOnlyProject(READ_ONLY_PROJECT)
--> 358 return wrapped(*args, **kwargs)

File c:\Users\M. Zeeshan\anaconda3\envs\bw\Lib\site-packages\bw2data\backends\peewee\database.py:311, in SQLiteBackend.make_searchable(self, reset)
    309 databases.flush()
    310 IndexManager(self.filename).delete_database()
--> 311 IndexManager(self.filename).add_datasets(self)

File c:\Users\M. Zeeshan\anaconda3\envs\bw\Lib\site-packages\bw2data\search\indices.py:47, in IndexManager.add_datasets(self, datasets)
     45 writer = self.get().writer()
     46 for ds in datasets:
---> 47     writer.add_document(**self._format_dataset(ds))
     48 writer.commit()

File c:\Users\M. Zeeshan\anaconda3\envs\bw\Lib\site-packages\bw2data\search\indices.py:35, in IndexManager._format_dataset(self, ds)
     29 def _format_dataset(self, ds):
     30     fl = lambda o: o[1].lower() if isinstance(o, tuple) else o.lower()
     31     return dict(
     32         name=ds.get(u"name", u"").lower(),
     33         comment=ds.get(u"comment", u"").lower(),
     34         product=ds.get(u"reference product", u"").lower(),
---> 35         categories=u", ".join(ds.get(u"categories", [])).lower(),
     36         location=fl(ds.get(u"location", u"")),
     37         database=ds[u"database"],
     38         code=ds['code']
     39     )

TypeError: can only join an iterable
```
I would be thankful for guidance on how to solve this issue. Thanks.