pandas-dev / pandas

Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
https://pandas.pydata.org
BSD 3-Clause "New" or "Revised" License

Allow custom metadata to be attached to panel/df/series? #2485

Closed. ghost closed this issue 4 years ago.

ghost commented 11 years ago

related:

  • https://github.com/pydata/pandas/issues/39 (column descriptions)
  • https://github.com/pydata/pandas/issues/686 (serialization concerns)
  • https://github.com/pydata/pandas/issues/447#issuecomment-11152782 (feature request, implementation variant)

Ideas and issues:

jreback commented 11 years ago

storage of this data is pretty easy to implement in HDFStore. (not pushing HDFStore as a general mechanism!)

general thoughts on meta data:

specific to HDFStore:

ghost commented 11 years ago

pytables is a very good fit in terms of features, but:

jreback commented 11 years ago

oh - was not suggesting we use this as a backend for storage of metadata in general (the above points were my comments on metadata in general - reading it again it DOES look like I am pushing HDFStore)

was just pointing out that HDFStore can support metadata if pandas structures do

to answer your questions:

  • not a hard dependency - nor should pandas make it one
  • not yet py3 (being worked on now I believe)
  • not in memory capable

ghost commented 11 years ago

+1 for all meta data living under a single attribute

I'm against allowing non-serializable objects as metadata at all, but I'm not sure if that should be a constraint on the objects or on the serialization format.

in any case, a hook+type tag mechanism would allow users to plant ids of external objects and reconstruct things at load-time. I've been thinking of suggesting a hooking mechanism elsewhere (for custom representations of dataframes - viz, html and so on).
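For illustration, a minimal sketch of such a hook + type-tag registry (all names here, register_meta_hook and friends, are hypothetical, not a proposed pandas API):

_META_HOOKS = {}

def register_meta_hook(tag, dump, load):
    """dump: object -> serializable id; load: id -> object."""
    _META_HOOKS[tag] = (dump, load)

def dump_meta_value(tag, obj):
    dump, _ = _META_HOOKS[tag]
    return {'tag': tag, 'id': dump(obj)}   # only the serializable id is stored

def load_meta_value(record):
    _, load = _META_HOOKS[record['tag']]
    return load(record['id'])              # reconstruct the object at load-time

# e.g. for an external plot object, store just a file path and rebuild from it
# (load_plot_from_path is likewise hypothetical):
# register_meta_hook('plot', dump=lambda p: p.path, load=load_plot_from_path)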

gerigk commented 11 years ago

what do you mean by 'not in memory capable'?

HDF5 has an in-memory + stdout writer, and pytables support has been added recently.

(https://github.com/PyTables/PyTables/pull/173)


ghost commented 11 years ago

oh. I wasn't aware of that and didn't find anything in the online docs. This seems to have been added after the latest pytables release (2.4.0), and so is not yet available off pypi or from the distros.

hughesadam87 commented 11 years ago

Thanks for including me on this request y-p.

IMO, it seems like we should not try to prohibit objects as metadata based on their serialization capability, if only because there is no way to account for every possible object. For example, Chaco plots from the Enthought Tool Suite don't serialize easily, but who would know that unless they tried? I think it's best to let users put anything in as metadata; if it can't serialize, they'll know when an error is thrown. It is also possible to have the program serialize everything but the metadata, and then just alert the user that this aspect has been lost.
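A rough sketch of that fallback behavior (filter_picklable_meta is hypothetical, not pandas API): keep what pickles, warn about the rest.

import pickle
import warnings

def filter_picklable_meta(meta):
    """Return only the metadata entries that survive pickling."""
    kept = {}
    for key, value in meta.items():
        try:
            pickle.dumps(value)
        except Exception:
            warnings.warn("metadata %r is not serializable; dropping it" % key)
        else:
            kept[key] = value
    return kept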

Does anyone here know the pandas source code well enough to understand how to implement something like this? I really don't have a clue, but hope this isn't asking too much of the developers.

Also, I think this addition will be a nice way to appease people who are always looking to subclass a dataframe.

  • up vote for the attribute being called 'meta'
  • up vote for putting it on Index classes as well as Series, DataFrame and Panel

dalejung commented 11 years ago

Last time I checked, HDF5 has a limit on the size of the AttributeSet. I had to get around it by having my store object encapsulate a directory, with .h5 and pickled meta objects.

dalejung commented 11 years ago

I think that adding metadata to the DataFrame object requires that it serialize and work with all backends (pickle, hdf5, etc.), which probably means restricting the type of metadata that can be added. There are corner cases to pickling custom classes that would become pandas problems.

hughesadam87 commented 11 years ago

Hi guys. I'm a bit curious about something. This fix currently addresses adding custom attributes to a dataframe. The values of these attributes could be python functions, no? If so, this might be a workaround for adding custom instance methods to a dataframe. I know some people way back when were interested in this possibility.

I think the way this could work is the dataframe should have a new method, call it... I dunno, add_custom_method(). This would take in a function, then add the function to the 'meta' attribute dictionary, with some sort of traceback to let the program know it is special.

When the proposed new machinery assigns custom attributes to the new dataframe, it also may be neat to automatically promote such a function to an instance method. If it could do that, then we would have a way to effectively subclass a DataFrame without actually doing so.

This is likely overkill for the first go around, but maybe something to think about down the road.
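For what it's worth, a sketch of what add_custom_method() might look like (the 'meta' dict and every name here are hypothetical, not pandas API):

import types
import pandas as pd

def add_custom_method(df, func, name=None):
    """Record func in the frame's meta and bind it as an instance method."""
    name = name or func.__name__
    df.meta.setdefault('methods', {})[name] = func   # remembered for re-attachment
    object.__setattr__(df, name, types.MethodType(func, df))

# usage:
# df = pd.DataFrame({'a': [1, 2]})
# df.meta = {}
# add_custom_method(df, lambda self: self * 2, 'doubled')
# df.doubled()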

jreback commented 11 years ago

@dalejung do you have a link to the AttributeSet limit? @hugadams you can simply monkey-patch if you want custom instance methods

import pandas

def my_func(self, **kwargs):
    return self * 2

pandas.DataFrame.my_func = my_func
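For example, after the patch:

df = pandas.DataFrame({'a': [1, 2]})
df.my_func()   # the same frame with every value doubled
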
hughesadam87 commented 11 years ago

@jreback: Thanks for pointing this out man. I've heard of monkeypatching instance methods, but always thought it was more of a colloquialism for something more difficult.

Thanks for showing me this.

dalejung commented 11 years ago

@jreback http://www.hdfgroup.org/HDF5/doc/UG/13_Attributes.html#SpecIssues maybe? It's been a while, and it could be that pytables hasn't implemented newer HDF5 features.

Personally, I had a dataset with ~40k items of metadata. Nothing complicated, just large. It was much easier to just pickle that stuff separately and use HDF for the actual data.

jreback commented 11 years ago

@dalejung thanks for the link....I am not sure of use-cases for meta data beyond simple structures anyhow....if you have regular data you can always store as separate structures or pickle or whatever....

jreback commented 11 years ago

@hugadams np....good luck

dalejung commented 11 years ago

@jreback sure, but that's kind of the state now. You can use DataFrames as attributes of custom classes. You can keep track of your metadata separately.

My point is that there would be an expectation for the DataFrame metadata serialization to work. The HDF5 limit is worse because it's based on size and not type, which means it can work until it suddenly does not.

There are always going to be use-cases we don't think of. Adding a metadata attribute that sometimes saves will be asking for trouble.

jreback commented 11 years ago

@dalejung ok...once PR #2497 is merged in you can try this out in a limited way (limited because data frames don't 'yet' pass this around). could catch errors if you try to store too much (not much to do in this case EXCEPT fail)

ghost commented 11 years ago

Looks like the arguments for and against the thorny serialization issue are clear.

Here is another thorny issue - what are the semantics of propagating meta through operations?

df1.meta.observation_date = "1/1/1981"
df1.meta.origin = "tower1"
df2.meta.observation_date = "1/1/1982"
df2.meta.origin = "tower2"

df3 = pd.concat([df1, df2])
# or merge, addition, ix, apply, etc.

Now, what's the "correct" meta for df3?

I'd be interested to hear specific examples of the problems you hope this will solve for you, what are the kinds of meta tags you wish you had for your work?
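For illustration only, one conservative rule would be to keep just the tags both inputs agree on (combine_meta is a hypothetical helper, not a proposal for the actual API):

def combine_meta(meta1, meta2):
    """Keep tags whose values agree in both inputs; drop conflicts."""
    return dict((k, v) for k, v in meta1.items()
                if k in meta2 and meta2[k] == v)

# combine_meta({'origin': 'tower1', 'units': 'K'},
#              {'origin': 'tower2', 'units': 'K'})
# -> {'units': 'K'}   # 'origin' conflicts, so df3 would carry none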


dalejung commented 11 years ago

@y-p I agree that propagation logic gets wonky. From experience, whether to propagate meta1/meta2/nothing is specific to the situation and doesn't follow any rule.

Maybe the need for metadata would be fulfilled by easier composition tools? For example, I tend to delegate attribute calls to the child dataframe and also connect the repr/str. There are certain conveniences that pandas provides that you lose with a simple composition.

Thinking about it, an api like the numpy array might be useful to allow composition classes to substitute for DataFrames.

hughesadam87 commented 11 years ago

Hi y-p. You bring up very good points in regard to merging. My thought would be that merged quantities that share keys should store results in a tuple instead of overwriting; however, this is still an unfavorable situation.

You know, once the monkey patching was made clear to me by jreback, I realized that I could most likely get all the functionality I was looking for in custom attributes. Perhaps what would be more helpful at this point, rather than custom attributes, would be a small tutorial on the main page about how to monkey patch and customize pandas DataStructures.

In my personal situation, I no longer feel that custom metadata would really make or break my projects if monkey patching is adequate; however, you guys seem to have a better overview of pandas, so I think that it really is your judgement call if the new pros of metadata would outweigh the cons.

ghost commented 11 years ago

Thanks for all the ideas, here is my summary:

  1. It might be useful to attach metadata to serialized files as opposed to live objects.
  2. People want to extend functionality in a natural way, rather than adding metadata, even if it makes no sense to have it as part of upstream. Monkey-patching is a useful idiom for that. I use it myself in my IPython startup file. (#2530)
  3. Allowing arbitrary metadata to be added to live objects makes little sense when mutation is inevitably involved. Well-defined metadata tags are bound to be either domain-specific, or suitable to be "baked in" when general enough.
  4. There might be an existing need for a "scientific data container" file format, probably to be designed by a committee over several years, producing a 250 page standard with a name like IKF-B76/J-2017, not adopted by anyone outside the US DOD energy research lab community. pandas is not it though.

Dropping the milestone for now, but I will leave this open if someone has more to add. If you need (1), please open an issue and explain your use-case.

hughesadam87 commented 11 years ago

Hey y-p. Thanks for leaving this open. It turns out that monkey patching has not solved my problem as I originally thought it would.

Yes, monkey patching does allow one to add custom instance methods and attributes to a dataframe; however, any function that results in a new dataframe will not retain the values of these custom attributes.

From an email currently on the mailing list:

import pandas

pandas.DataFrame.name = None

df = pandas.DataFrame()
df.name = 'Bill'
df.name
>>> 'Bill'

df2 = df.mul(50)
df2.name
>>>

I've put together a custom dataframe for spectroscopy that I'm very excited about putting at the center of a new spectroscopy package; however, I realized that every operation that returns a new dataframe resets all of my custom attributes. The instance methods and slots for the attributes are retained, so this is better than nothing, but it is still going to hamper my program.

The only workaround I can find is to add some sort of attribute transfer function to every single dataframe method that I want to work with my custom dataframe. Thus, the whole point of making my object a custom dataframe is lost.

With this in mind, I think monkey patching is not adequate unless there's a workaround that I'm not aware of. Will see if anyone replies on the mailing list.

jreback commented 11 years ago

@hugadams you are probably much better off to create a class to hold both the frame and the meta and then forward methods as needed to handle manipulations...something like

class MyObject(object):

    def __init__(self, df, meta):
        self.df = df
        self.meta = meta

    @property
    def ix(self):
        return self.df.ix

depending on what exactly you need to do, the following will work

o = MyObject(df, meta)
o.ix[:,'foo'] = 'bar'
o.name = 'myobj'

and then you can customize serialization, object creation, etc. you could even allow __getattr__ to automatically forward methods to df/meta as needed

only gets tricky when you do mutations

o.df = o.df * 5

you can even handle this by defining __mul__ in MyObject
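a sketch of that (restating MyObject from above with the multiply hook added):

class MyObject(object):
    def __init__(self, df, meta):
        self.df = df
        self.meta = meta

    def __mul__(self, other):
        # wrap the result so the metadata travels with it
        return MyObject(self.df * other, self.meta)

# o = MyObject(df, meta)
# o2 = o * 5   # still a MyObject, same meta; o.df is untouched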

you prob have a limited set of operations that you really want to support, power users can just reach in and grab o.df if they need to...

hth

hughesadam87 commented 11 years ago

@jreback

Thanks for the input. I will certainly keep this in mind if the metadata idea of this thread never reaches fruition, as it seems to be the best way forward. Do you know offhand how I can implement direct slicing, e.g.:

o['col1'] instead of o.df['col1']

I wasn't sure how to transfer that functionality to my custom object without a direct call to the underlying dataframe.

Thanks for pointing out the mul redefinition. This will help me going forward.

This really does feel like a roundabout solution to the DataFrame's inability to be subclassed. Especially if my custom object were to evolve with pandas, this would require maintenance to keep it synced up with changes to the DataFrame API.

What if we do this: using jreback's example, we create a generic class with the specific intention of it being subclassed for custom use? We can include the most common DataFrame methods and update all the operators accordingly. Then, hopeless fools like me who come along with the intent to customize have a really strong starting point.

I think that pandas' full potential has yet to be recognized by the research community, and anticipate it will diffuse into many more scientific fields. As such, if we could present them with a generic class for customizing dataframes, then researchers may be more inclined to build packages around pandas, rather than coming up with their own ad-hoc datastructures.

jreback commented 11 years ago

There are only a handful of methods you prob need to worry about (you can always access df anyhow): arithmetic, getitem, setitem, ix, maybe boolean

depends on what you want the user to be able to do with your object. python is all about least surprise; an object should do what you expect. in this case, are you having your object quack like a DataFrame with extra attributes, or are you really doing more complex stuff like redefining the way operators work?

for example you could redefine * to mean call my cool multiplier function, and in some fields this makes sense (e.g. frequency domain analysis you want * to mean convolution)

can you provide an example of what you are trying to do?

# to provide: o['col1'] access

def __getitem__(self, key):
    # you could intercept calls to metadata here, for example
    if key in self.meta:
        return self.meta[key]
    return self.df[key]

hughesadam87 commented 11 years ago

All I'm doing is creating a dataframe for spectral data. As such, it has a special index type that I've written called "SpecIndex" and several methods for transforming itself to various representations of data. It also has special methods for extending how temporal data is managed. In any case, these operations are well-contained in my monkey patched version, and also would be easily implemented in a new class formalism as you've shown.

After this, it really should just quack. Besides these spectroscopic functions and attributes, it should behave like a dataframe. Therefore, I would prefer the most common operations on the dataframe to be seamless and promoted to instance methods. I want to encourage users to learn pandas and use this tool for exploratory spectroscopy. As such, I'm trying to intercept any inconsistencies ahead of time, like the one you pointed out about o.df = o.df * 5. Will I have to change the behavior of all the basic operators (e.g. * / + -) or just *? Any caveat like this, I'd like to correct in advance. In the end, I want the class layer itself to be as invisible as possible.

Do any more of these gotchas come to mind?

dalejung commented 11 years ago

It's best to think of Pandas objects like you do integers. If you had a hypothetical Person object, its height would just be a number. The number would have no idea it was a height or what unit it was in. It's just there for numerical operations. height / height_avg doesn't care about the person's sex, weight, or race.

I think when the DataFrame is the primary data object this seems weird. But imagine that the Person object had a weight_history attribute. It wouldn't make sense to subclass a DataFrame to hold that attribute. Especially if other Pandas objects existed in Person data.

subclassing/metadata will always run into issues when doing exploratory analysis. Does SubDataFrame.tail() return a SubDataFrame? If it does, will it keep the same attributes? Do we want to make a copy of the dict for all ops like + - * /?

After a certain point it becomes obvious that you're not working with a Person or SpectralSeries. You're working on an int or a DataFrame. In the same way that convert_height(Person person) isn't more convenient than convert_height(int height), getting your users into the mindset that a DataFrame is just a data type will be simpler in the long run. Especially if your class gets more complicated and needs to hold more than one Pandas object.

jreback commented 11 years ago

@hugadams I would suggest looking at the various tests in pandas/tests/test_frame.py, and creating your own test suite. You can start by using your 'DataFrame' like object and see what breaks (obviously most things will break at the beginning). Then skip tests and/or fix things as you go.

You will probably want to change most of the arithmetic operations (e.g. * / + -), i.e. anything you want a user to be able to do.

dalejung commented 11 years ago

@hugadams If you want to see an old funky attempt at subclassing a df: http://nbviewer.ipython.org/4238540/

It quasi-works because pretty much every DataFrame magic method calls another method; this gets intercepted in __getattribute__ and redirected to SubDF.df.

hughesadam87 commented 11 years ago

Thanks for all the help guys. I think I agree that subclassing will get me into trouble, and thanks for sharing the example.

I will attempt jreback's implementation, but may I first ask one final thing?

The only reason I want special behavior/subclassing is that I want my custom attributes to persist after operations on the dataframe. Looking at this subclass example, it leads me to believe that it may not be so difficult to change the right methods so that these few new fixed attributes are transferred to any new dataframe created from an instance method or general operation. I mean, dataframes already preserve their attribute values upon mutation. How hard would it be to simply add my handful of new attributes into this machinery? This seems like it might be less work than building an entirely new class just to store attributes. (Instance methods can be monkey-patched after all.)

@dalejung, in the simplest case, if all I wanted to do was add a "name" attribute to dataframe, such that its value will persist after doing:

df=df*5

or

df = df.apply(somefunc)

Would this be an easy hack to the source you provided?

jreback commented 11 years ago

@hugadams 'has-a' allows your custom attributes to persist regardless of the changes on the df, or, more to the point, to change when you want them to change (and, as an aside, if you ever implemented disk-based persistence, this would be easy)

generally i use 'is-a' only when I really need specialized types

when you need a bit of both, you can do a mix-in (not advocating that here as this gets even more complicated!)

hughesadam87 commented 11 years ago

@jreback

Well, my current monkey-patched dataframe already works and sets attributes in a self-consistent manner. Therefore, the only problem I have is that the values of these attributes are lost under operations that return a new dataframe. From what I've learned, then, my best solution would be simply to use this object as is and implement the 'has-a' behavior you mentioned.

I apologize for being so helpless, but can you elaborate or provide an example of what you mean by 'has-a'? I don't see where in the DataFrame source code this is used.

jreback commented 11 years ago

@hugadams 'has-a' is just another name for the container class I described above; 'is-a' is subclassing

hughesadam87 commented 11 years ago

@jreback Ok, thanks man. I will go for it, I appreciate your help.

dalejung commented 11 years ago

@hugadams It could persist the .name if you transferred it in the wrap, but there will be corner cases where this breaks down. Honestly, it's not a worthwhile path to go down. It's much better to use composition and keep the DataFrame a separate entity.
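A minimal sketch of that transfer, mostly to show where it breaks down (NamedFrame and _wrap are illustrative, not a recommendation):

import pandas as pd

class NamedFrame(object):
    """Composition wrapper that carries .name across a few operations."""
    def __init__(self, df, name=None):
        self.df = df
        self.name = name

    def _wrap(self, result):
        if isinstance(result, pd.DataFrame):
            return NamedFrame(result, self.name)
        return result   # scalars, Series, etc. fall through unwrapped

    def __mul__(self, other):
        return self._wrap(self.df * other)

    def apply(self, func, **kwargs):
        return self._wrap(self.df.apply(func, **kwargs))

# every path you don't explicitly wrap (groupby, concat, slicing, ...)
# still loses the name; those are the corner cases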

hughesadam87 commented 11 years ago

@dalejung

Thanks. I will try using composition.

ghost commented 11 years ago

Here's a wrong way to do it:

In [1]: import pandas as pd
   ...: def hook_class_init(cls):
   ...:     def f(self,*args,**kwds):
   ...:         from inspect import currentframe
   ...:         f.__orig_init__(self,*args,**kwds)
   ...:         obj = currentframe().f_back.f_locals.get('self')
   ...:         if isinstance(obj,self.__class__) and hasattr(obj,"meta"):
   ...:             setattr(self,"meta",getattr(obj,'meta'))  
   ...:     if not hasattr(cls.__init__,"__orig_init__"):
   ...:         f.__orig_init__ = cls.__init__
   ...:         cls.__init__=f
   ...: 
   ...: def unhook_class_init(cls):
   ...:     if hasattr(cls.__init__,"__orig_init__"):
   ...:         cls.__init__=cls.__init__.__orig_init__

In [2]: from pandas.util.testing import makeCustomDataframe as mkdf
   ...: hook_class_init(pd.DataFrame)
   ...: df1=mkdf(10,4)
   ...: df1.meta=dict(name="foo",weight="buzz")
   ...: x=df1.copy()
   ...: print x.meta
{'name': 'foo', 'weight': 'buzz'}

In [3]: unhook_class_init(pd.DataFrame)
   ...: x=df1.copy()
   ...: print x.meta
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-3-ed39e9901bfc> in <module>()
      1 unhook_class_init(pd.DataFrame)
      2 x=df1.copy()
----> 3 print x.meta

/home/user1/src/pandas/pandas/core/frame.pyc in __getattr__(self, name)
   2020             return self[name]
   2021         raise AttributeError("'%s' object has no attribute '%s'" %
-> 2022                              (type(self).__name__, name))
   2023 
   2024     def __setattr__(self, name, value):

AttributeError: 'DataFrame' object has no attribute 'meta'
hughesadam87 commented 11 years ago

Thanks y-p.

Ya, I'm beginning to see the amount of hacking required to do something like this. I'll stick with the composition method.

dalejung commented 11 years ago

So I've been playing with the subclassing/composition stuff.

https://github.com/dalejung/trtools/blob/master/trtools/tools/composition.py

The simple use-case I've been using is a return dataset. I have ReturnFrame/ReturnSeries with attributes/methods that work specifically on returns. So far, it seems to be useful if only to save me from typing (1+returns) so often. As expected, I ran into the metadata issue where something like type(returns > .01) == ReturnSeries # true occurs, which makes no sense.

I also ran into the issue where a DataFrame will lose a series' class/metadata when it is added. I had to create a subclass that dumbly stores the class/metadata in dicts and rewraps on attr/item access.

https://github.com/dalejung/trtools/blob/master/trtools/core/dataset.py

It's been a couple days and I haven't run into any real issues outside of not initially understanding the numpy api. However, subclassing the pd.DataFrame, while necessary to trick pandas into accepting the class, makes things messy and requires using __getattribute__ to ensure I'm wrapping the return data.

I'm starting to think that having a __dataframe__ api would be a good choice. It would allow composition classes to gain a lot of the re-use and simplicity of subclassing while avoiding the complications. Supporting subclassing seems like an open-ended commitment for pandas. However, having a dataframe api would allow custom classes to easily hook into pandas while maintaining the contract that pandas only knows about and deals with pandas.DataFrames.
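Roughly the idea (a sketch only; no such __dataframe__ hook exists in pandas, and as_dataframe is hypothetical):

import pandas as pd

class ReturnFrame(object):
    """Composition class exposing a plain-DataFrame view of itself."""
    def __init__(self, df):
        self.df = df

    def __dataframe__(self):
        return self.df

def as_dataframe(obj):
    # what pandas internals might do instead of isinstance checks
    if isinstance(obj, pd.DataFrame):
        return obj
    if hasattr(obj, '__dataframe__'):
        return obj.__dataframe__()
    raise TypeError("cannot expose %r as a DataFrame" % type(obj))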

hughesadam87 commented 11 years ago

Hi dalejung. This looks pretty cool. I'm on vacation so I can't play with it, but can you explain exactly what you mean by a dataframe api? Do you mean an api for customizing a dataframe, or am I misunderstanding?


hughesadam87 commented 11 years ago

Hi guys,

I noticed that one way to pick up a bunch of the dataframe behavior is to override the __getattr__ call, e.g.:

from pandas import DataFrame

class Foo(object):

    def __init__(self):
        self.df = DataFrame([1, 2, 3], index=[3, 4, 5])

    def __getattr__(self, attr):
        return getattr(self.df, attr)

    def __getitem__(self, key):
        '''Item lookup'''
        return self.df.__getitem__(key)

This way, many important methods and attributes of a dataframe (like ix, apply, shape) seem to just work out of the box on my Foo object. This saves me a lot of manual effort in promoting the attributes and methods that I want to work directly on Foo. Do you guys anticipate any big errors or problems that this could introduce?

hughesadam87 commented 11 years ago

I just wanted to add to this thread that I put in a pull request for a composition class that attempts to mimic a dataframe in as general a way as possible.

https://github.com/pydata/pandas/pull/2695

Although I noticed DaleJung's implementation may work better, I can't test it because I'm getting an error with a long traceback:

File "/home/hugadams/Desktop/trttools/trtools-master/trtools/tools/composition.py", line 4, in <module>
  import pandas as pd
File "/usr/local/EPD/lib/python2.7/site-packages/pandas/__init__.py", line 27, in <module>
  from pandas.core.api import *
File "/usr/local/EPD/lib/python2.7/site-packages/pandas/core/api.py", line 13, in <module>
  from pandas.core.series import Series, TimeSeries
File "/usr/local/EPD/lib/python2.7/site-packages/pandas/core/series.py", line 3120, in <module>
  import pandas.tools.plotting as _gfx
File "/usr/local/EPD/lib/python2.7/site-packages/pandas/tools/plotting.py", line 21, in <module>
  import pandas.tseries.converter as conv
File "/usr/local/EPD/lib/python2.7/site-packages/pandas/tseries/converter.py", line 7, in <module>
  import matplotlib.units as units
File "/usr/local/EPD/lib/python2.7/site-packages/matplotlib/__init__.py", line 151, in <module>
  from matplotlib.rcsetup import (defaultParams,
File "/usr/local/EPD/lib/python2.7/site-packages/matplotlib/rcsetup.py", line 20, in <module>
  from matplotlib.colors import is_color_like
File "/usr/local/EPD/lib/python2.7/site-packages/matplotlib/colors.py", line 54, in <module>
  import matplotlib.cbook as cbook
File "/usr/local/EPD/lib/python2.7/site-packages/matplotlib/cbook.py", line 11, in <module>
  import gzip
File "/usr/local/EPD/lib/python2.7/gzip.py", line 36, in <module>
  class GzipFile(io.BufferedIOBase):
AttributeError: 'module' object has no attribute 'BufferedIOBase'

DaleJung, if you are going to continue to work on this at some point in the future, can you let me know (hugadams@gwmail.gwu.edu)? I don't want to submit my pull request if your solution turns out to make more sense.

Thanks.

dalejung commented 11 years ago

That error looks like a working directory issue. Likely my io directory is pre-empting the base io module. Try importing from a different dir?

The composition class is something I actively used and will update as I run into issues. The reality is that it's a complete hack. I'd be wary of promoting its use without knowing internally what it's doing to masquerade around as a DataFrame.

hughesadam87 commented 11 years ago

Thanks dalejung. I will try to get it working.

Do you have a file in here that runs a series of tests on your object, such as the ones mentioned? If so, do you mind if I borrow it and run some tests on my implementation? You said that yours was a complete hack; I don't think my crack at it is necessarily a hack, but it's probably not optimal, if that makes sense.

dalejung commented 11 years ago

I've started making more use of the subclassing lately and so I moved it to a separate project.

https://github.com/dalejung/pandas-composition

I'm writing more test coverage. I usually subclass for very specific types of data so my day to day testing is constrained.

http://nbviewer.ipython.org/5864433 is kind of the stuff I use it for.

hughesadam87 commented 11 years ago

Thanks for letting me know Dale.

I'm at scipy, and at one of the symposia some people were really interested in the ability to subclass dataframes. Do you think this will be merged into a future release, or will it be kept apart from the main repo?

It's probably better implemented than my working solution, so I'm looking forward to it.


dalejung commented 11 years ago

Seems implausible to be honest. While I tried to make it as robust as possible, pandas-composition works the way I think it should. Decisions like

  • whether to propagate meta-data
  • whether df['name'] = s should overwrite s.name
  • not supporting HDF5 directly

were easy to make for my use case. These are decisions that would have to be better thought out if included in pandas. There's a reason it hasn't been included thus far.

Take the hdf5 for example. I chose to store the numerical data in hdf5 and the metadata separately. So it works more like a bundle on MacOSX. You wouldn't get a subclass from the hdf5, you'd just get a DataFrame.

Plus, I still think it'd be better to create a dataframe api where a class can expose a DataFrame representation for pandas operations. So instead of checking isinstance(obj, DataFrame) and operating on obj, we'd check for __dataframe__, call it, and operate on its return.

That would make composition simple.

hughesadam87 commented 11 years ago

I see what you're saying. Here's a pretty hacked up solution that I ended up using for my spectroscopy package.

https://github.com/hugadams/pyuvvis/blob/master/pyuvvis/pandas_utils/metadframe.py

It basically overloads as many operators as possible and defers attribute calls to the underlying df object stored in the composite class. It is semi-hacky in that I just tried to overload and redirect until every operation that I used in my research worked. If you can provide any feedback, especially constructive, on its implementation or obvious design flaws, I'd be very appreciative.

(Also, it's called MetaDataFrame, but I realize this is a poorly chosen name.)

At Scipy this year, pandas was quite popular. I can feel it spilling over to other domains, and people are running up against the same problems we are. More folks have already begun implementing their own solutions. I feel that now is the time when at least giving these people a starting point would be really well-received. Even if we acknowledge that they will need to customize, and it won't be part of pandas, it would be nice to have something to start with. At the very minimum, having a small package and a few docs that say "here's how we implemented it and here's all the caveats" would be a nice starting point for others looking to customize pandas data structures.


jtratner commented 11 years ago

well, a start could be defining a set of functions like those that @cpcloud abstracted out while cleaning up data.py (is_dataframe, is_series, is_panel, etc.) which could then be overridden/altered more easily than a ton of isinstance checks all over the place.
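Something like the following (a sketch; these helpers and their override story are illustrative, not the actual functions in data.py):

import pandas as pd

def is_dataframe(obj):
    return isinstance(obj, pd.DataFrame)

def is_series(obj):
    return isinstance(obj, pd.Series)

# a single override point: a downstream package could rebind these to also
# accept its wrapper classes, instead of editing isinstance checks inline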


jtratner commented 11 years ago

@hugadams have you considered trying to refactor the DataFrame code to replace calls to DataFrame with _constructor, and possibly adding other class hooks (like Series to _series and Panel to _panel) which would (internally) return the class to use when creating result objects (so, in many methods, instead of Series(), one could use self._series())? In particular, this could live in pandas core and be broadly useful.
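A sketch of the _constructor idea (SpecFrame is hypothetical; this is roughly the subclassing hook pandas itself later documents, not something guaranteed to work everywhere at this point):

import pandas as pd

class SpecFrame(pd.DataFrame):
    """Toy subclass: internals that build results via _constructor
    (rather than calling DataFrame() directly) preserve this type."""

    @property
    def _constructor(self):
        return SpecFrame

sf = SpecFrame({'a': [1, 2, 3]})
print(type(sf * 2))   # SpecFrame, wherever results are built via _constructor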