**lf-** opened this issue 2 years ago · Open
Sounds good. 👍
In an ideal world we would derive the schema and documentation from the same representation that we use for parsing. However, I'm not sure how practical this is right now.
I think these are actionable items:
I haven't used it before, but you may be able to use Autodocodec for this.
We currently have our own ad-hoc generic machinery in `Data.Aeson.Config.FromValue` that:
I'm not sure what the situation is with other implementations. Ideally, we would also want to derive the reference (those tables in our current README) from that same representation.
@NorfairKing just in case, are these things within the scope of Autodocodec? Where are the tests located in the source tree btw? I just took a look but only saw a Doctest driver.
> @NorfairKing just in case, are these things within the scope of Autodocodec?
The warnings are in scope but not implemented.
The field aliases are already supported.
Deriving a json schema is the exact use-case of autodocodec.
Even better would be the human-readable, syntax-highlighted schema that you get with `autodocodec-yaml`.
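For context, this is the shape of `HasCodec` instance that the autodocodec README shows (a sketch requiring the `autodocodec` package; the record and documentation strings here are illustrative, not hpack's):

```haskell
{-# LANGUAGE OverloadedStrings #-}
module Person where

import Autodocodec

data Person = Person
  { personName :: String
  , personAge  :: Int
  }
  deriving (Show, Eq)

-- Every field carries its documentation; the same codec value drives
-- parsing, encoding, and the generated (JSON or YAML) schema.
instance HasCodec Person where
  codec =
    object "Person" $
      Person
        <$> requiredField "name" "name of the person" .= personName
        <*> requiredField "age" "age of the person" .= personAge
```

From such an instance, `ToJSON`/`FromJSON` instances can be derived via the codec, and the schema is rendered from the same value, which is what makes documentation of every field non-optional.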
> Where are the tests located in the source tree btw? I just took a look but only saw a Doctest driver.
The readme has a section on this: https://github.com/NorfairKing/autodocodec#tests
@NorfairKing looking at the example from the README, links to the generated output (schema, sample JSON, etc) could be a nice addition. I am not on a computer right now, so I can't try. And even when I'm back at the computer, I am swamped with a plethora of other things.
Questions:

- The `HasCodec` instance is used to derive all the other instances. The example still derives a `Generic` instance; for what is that instance used/necessary? Or is that `Generic` instance unused?
- Is there a generic implementation for `HasCodec`, or some other "generic way" to utilize this library?

> @NorfairKing looking at the example from the README, links to the generated output (schema, sample JSON, etc) could be a nice addition. I am not on a computer right now, so I can't try. And even when I'm back at the computer, I am swamped with a plethora of other things.
Those are here: https://github.com/NorfairKing/autodocodec/tree/master/autodocodec-api-usage/test_resources
> The example still derives a Generic instance; for what is that instance used/necessary? Or is that Generic instance unused?
It's not necessary, that's used for another part of the tests.
> Is there a generic implementation for HasCodec, or some other "generic way" to utilize this library?
No, and that's the point. You're forced to document every field in the schema (or deliberately circumvent that).
> If there is no generic implementation, have you tried implementing this in a fully generic way? If yes, did you hit any road blocks?
Yes I have. The roadblock is that it's a bad idea because the entire point is to document your implementation, which you are circumventing using a generic implementation.
> > @NorfairKing looking at the example from the README, links to the generated output (schema, sample JSON, etc) could be a nice addition. I am not on a computer right now, so I can't try. And even when I'm back at the computer, I am swamped with a plethora of other things.
>
> Those are here: https://github.com/NorfairKing/autodocodec/tree/master/autodocodec-api-usage/test_resources
I looked at that; it was just not immediately clear to me which exact files correspond to the example from the README. Not that important, though. Once you understand what the example is doing, it's easy to imagine what the schema will look like.
> > The example still derives a Generic instance; for what is that instance used/necessary? Or is that Generic instance unused?
>
> It's not necessary, that's used for another part of the tests.
> > Is there a generic implementation for HasCodec, or some other "generic way" to utilize this library?
>
> No, and that's the point. You're forced to document every field in the schema (or deliberately circumvent that).
Ok, I guess that makes sense. For Hpack specifically we could annotate the fields with type literals (as we already do for aliases) and generically derive the documentation from that. But I think that's only really an option if you use those types for parsing only, and nothing much else.
I like how the encoder encapsulates everything in a single value, btw.
For completeness, I think conceptually this is what we would want:
```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE TypeFamilies #-}
module Person where

import Data.Coerce
import GHC.Generics
import GHC.TypeLits

newtype AnnotatedField (documentation :: Symbol) a = AnnotatedField a

data Annotated
data Parsed

type family Field representation (documentation :: Symbol) a
type instance Field Annotated documentation a = AnnotatedField documentation a
type instance Field Parsed documentation a = a

data Person_ representation = Person {
    personName :: Field representation "name of person" String
  , personAge :: Field representation "age of person" Int
  } deriving Generic

type Person = Person_ Parsed
type AnnotatedPerson = Person_ Annotated

deriving instance Show Person
deriving instance Eq Person

parseAnnotatedPerson :: String -> AnnotatedPerson
parseAnnotatedPerson = genericParse
  where
    genericParse = undefined -- add generic implementation here

parse :: String -> Person
parse = undefined -- coerce . parseAnnotatedPerson

schema :: AnnotatedPerson -> String
schema = genericSchema
  where
    genericSchema = undefined -- add generic implementation here
```
Note that:

- This sketch does not give us the other instances (`ToSchema`, ...). We would probably still want them.
- `Person` and `AnnotatedPerson` are representationally equivalent. Hence it should be possible to `coerce` one into the other. However, I couldn't convince GHC to do so. So we would need to do one of:
  - figure out how to make `coerce` work.
  - use `unsafeCoerce`, provided that we can encapsulate it in a way that provides a safe API to the user.
- For Hpack we would need to extend `AnnotatedField` with additional information (cabal name, aliases, deprecation, ...).
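On the coercion point, one possible way out is to hide a single `unsafeCoerce` behind a small wrapper. This is a minimal stand-alone sketch (the definitions repeat the ones above so the module compiles on its own; `stripAnnotations` is a hypothetical name): since `AnnotatedField` is a newtype, `AnnotatedPerson` and `Person` have identical runtime layouts, so the cast is representation-safe even though GHC rejects `coerce` here.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
module Main where

import GHC.TypeLits (Symbol)
import Unsafe.Coerce (unsafeCoerce)

-- Stand-ins for the definitions from the sketch above.
newtype AnnotatedField (documentation :: Symbol) a = AnnotatedField a

data Annotated
data Parsed

type family Field representation (documentation :: Symbol) a
type instance Field Annotated documentation a = AnnotatedField documentation a
type instance Field Parsed documentation a = a

data Person_ representation = Person
  { personName :: Field representation "name of person" String
  , personAge  :: Field representation "age of person" Int
  }

type Person = Person_ Parsed
type AnnotatedPerson = Person_ Annotated

-- `coerce` is rejected because the fields go through a type family,
-- which forces a nominal role, so we encapsulate one `unsafeCoerce`.
-- `AnnotatedField` is a newtype, so both types share one runtime layout.
stripAnnotations :: AnnotatedPerson -> Person
stripAnnotations = unsafeCoerce

main :: IO ()
main = do
  let annotated = Person (AnnotatedField "Alice") (AnnotatedField 42) :: AnnotatedPerson
      person = stripAnnotations annotated
  putStrLn (personName person)
  print (personAge person)
```

A safe public API would export only `stripAnnotations` (and its inverse), keeping `unsafeCoerce` out of user code.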
I speculate, unconfidently, that you might need the RoleAnnotations extension to make that coercion work: https://gitlab.haskell.org/ghc/ghc/-/wikis/roles
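Whatever the coercion story turns out to be, the documentation `Symbol`s themselves can already be recovered at runtime via `KnownSymbol`, which is the per-field core of what a `genericSchema` would do. A minimal stand-alone sketch (the `fieldDoc` helper is illustrative, not part of any library):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE ScopedTypeVariables #-}
module Main where

import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownSymbol, Symbol, symbolVal)

-- Stand-in for the AnnotatedField from the sketch above.
newtype AnnotatedField (documentation :: Symbol) a = AnnotatedField a

-- Recover the type-level documentation string at runtime.
fieldDoc :: forall documentation a. KnownSymbol documentation
         => AnnotatedField documentation a -> String
fieldDoc _ = symbolVal (Proxy :: Proxy documentation)

main :: IO ()
main = do
  putStrLn (fieldDoc (AnnotatedField "Alice" :: AnnotatedField "name of person" String))
  putStrLn (fieldDoc (AnnotatedField (42 :: Int) :: AnnotatedField "age of person" Int))
```

A full `genericSchema` would walk the `Generic` representation of `AnnotatedPerson` and apply this extraction to every field.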
It would be super cool if there was a JSON schema for hpack, which would automatically enable IDE tools to provide autocompletion and checking of hpack files. I'm motivated for this by my terrible memory for the syntax of both cabal files and hpack files.
I am filing this as a good-first-issue kind of thing, not a request for you to write it necessarily.