k0001 / pipes-binary

Encode and decode binary streams using the pipes and binary libraries.

Add encodeMany #7

Closed quchen closed 11 years ago

quchen commented 11 years ago

encodeMany is the pipe version of encode. It continuously reads from upstream, and sends ByteString-encoded data downstream.

A simple example use case: the sending side of a chat client. To keep it short, the networking has been stripped out, so it just prints each message and its encoded version.

module Main (main) where

import Pipes
import Data.Binary (Binary)
import qualified Data.ByteString as BS
import qualified Pipes.Prelude as P
import qualified Pipes.Binary as P

main :: IO ()
main = do
      putStrLn "encodeMany demonstration"
      putStrLn "Enter \"quit\" to quit"
      runEffect $ P.stdinLn >-> handle >-> encodeMany >-> send

handle :: Pipe String String IO ()
handle = P.takeWhile (/= "quit") >-> P.mapM dispatch
      where dispatch :: String -> IO String
            dispatch msg = do
                  putStrLn $ "Message: \"" ++ msg ++ "\""
                  return msg

-- | Continuously encodes the given 'Binary' instance and sends each result
--   downstream in 'BS.ByteString' chunks.
encodeMany :: (Monad m, Binary x) => Pipe x BS.ByteString m r
encodeMany = for cat P.encode

send :: (MonadIO io) => Consumer BS.ByteString io ()
send = P.show >-> P.stdoutLn
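For a concrete idea of the bytes that flow downstream from encodeMany, here is a minimal sketch using only the binary and bytestring libraries (no pipes involved); it relies on the stock Data.Binary instance for Int:

```haskell
module Main (main) where

import qualified Data.Binary as Bin
import qualified Data.ByteString.Lazy as BL

main :: IO ()
main = do
  -- The stock Binary instance encodes an Int via Int64:
  -- eight big-endian bytes.
  print (BL.unpack (Bin.encode (42 :: Int)))
```

In the pipeline above, each value handed to encodeMany turns into exactly this kind of byte sequence before send prints it.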
quchen commented 11 years ago

Travis fails, but I swear I'm innocent! :-/

k0001 commented 11 years ago

I'm not a big fan of adding a function that does just for cat encode in this case. The reason is that for cat encode is really easy to write, and the Pipes tutorial does a really good job of explaining how to use Producers like encode on their own. On the other hand, I don't really oppose adding encodeMany, but it seems to me that there's really no need for it, and perhaps a bit of documentation next to encode saying that you can use it as for cat encode would suffice. In fact, I created issue #3 some time ago about adding this documentation, but I've been really busy these last weeks, so I haven't had a chance to update it yet.

What do you think?

Also, don't worry about the Travis build. It fails because this package now depends on pipes-parse==2.1.0, which isn't on Hackage yet. Maybe I can instruct Travis to use cabal-meta to build the package somehow, so that it always grabs the latest dependencies directly from their Git repositories.

k0001 commented 11 years ago

@quchen Do you think the changes made in pull request #8 would work, instead of adding encodeMany?

quchen commented 11 years ago

I see your point, but I think having it as an explicit Pipe is conceptually closer to the desired behaviour of encoding: data flows downstream, and encodeMany transforms everything to binary. I actually looked around the whole library because I could not believe this behaviour wasn't in there somewhere already. I would also prefer this behaviour for the decoding part, but the name decodeMany is already taken (and the implementation would probably require bidirectional pipes, so I can see a reason for not having it).

And as for "it's so basic it doesn't need its own function", that's sort of what the entirety of Pipes.Prelude does.

quchen commented 11 years ago

Oh, and I just realized that having it pipe-based also works well when using zlib pipes, a la encodeMany >-> compress >-> toSocket s. That brings up the question again whether a decoding pipe would be a good addition. I also think this entire thing is much in the spirit of Pipes' modularity (represented by pipe composition) instead of mutation (which is what "pipe-transforming functions" like decodeMany do).

k0001 commented 11 years ago

Writing a decoding Pipe is a different story than writing an encoding one. In the case of pipes-zlib it works because there are no individual entities that need to be parsed before being forwarded downstream; instead, each chunk in the stream is treated equally and forwarded downstream immediately after decompressing it, and one doesn't need to consider boundaries between content or end-of-input. That is, pipes-zlib doesn't need the features provided by pipes-parse, which force your decoding tools to be written as functions of Producers.
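The boundary problem can be seen without pipes at all. A minimal sketch using only the binary and bytestring libraries (stock Int instances assumed): two encoded values concatenate into one undelimited byte stream, and the decoder must hand back the unconsumed input so the caller can resume at the next value, which is the kind of leftover bookkeeping pipes-parse exists to provide:

```haskell
module Main (main) where

import qualified Data.Binary as Bin
import qualified Data.ByteString.Lazy as BL

main :: IO ()
main = do
  -- Two encoded Ints, concatenated: no delimiter marks the boundary.
  let stream = BL.concat [Bin.encode (1 :: Int), Bin.encode (2 :: Int)]
  -- decodeOrFail returns the unconsumed remainder of the input,
  -- which is what lets decoding resume at the next value.
  case Bin.decodeOrFail stream of
    Left _ -> putStrLn "decode failed"
    Right (rest, _, x) -> do
      print (x :: Int)
      print (Bin.decode rest :: Int)
```

A plain decompressing Pipe never needs this: it can forward every chunk as soon as it arrives, with no notion of where one value ends and the next begins.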

Regarding encodeMany, the given example can be written as for cat encode >-> compress >-> toSocket s, and that approach is encouraged by Pipes in general. encode on its own is more usable than for cat encode packaged in some other function, and using it in different ways, such as this one, is straightforward enough that there's not really a need to package these different combinations. I think Pipes users are better served by understanding how to use basic tools such as (>->), for, cat, (~>) or each to combine different Proxys together, and I also think that the Pipes tutorial does a great job of encouraging that.

Also, notice that in Pipes.Prelude, the only function that does this for cat bar trick where bar is another function already exported is concat, which is implemented as for cat each (something justifiable, given that it might not be an obvious implementation).

I think for now I'd like to just change the documentation, encouraging the use of for cat encode and providing some examples. There's always time to add more functions later if we decide to change our minds. I hope you find this approach good for the time being.

quchen commented 11 years ago

There's also for cat (each . f) now ;-)

Anyway, I think we both have valid points here and you're the maintainer, so the documentation compromise is good enough. (And thanks for the explanation of the difference from zlib pipes; I'm sure remembering that compression is only pipe-chunk-wise will be useful some day.)

k0001 commented 10 years ago

@quchen do you mind if I mention your name in the PEOPLE file where I mention everyone who has somehow contributed to the development of this library?

quchen commented 10 years ago

@k0001 Sure, go ahead if you want. My contribution was quite small though, so maybe don't use capital letters or something ;-)

k0001 commented 10 years ago

Thank you :)