Closed: @987Nabil closed this 1 month ago
/bounty $175
Attempt | Started (GMT+0) | Solution
---|---|---
🟢 @987Nabil | Aug 30, 2024, 9:13:28 PM | #3074
🟢 @varshith257 | Aug 31, 2024, 11:00:15 AM | #3073
🎉🎈 @987Nabil has been awarded $175! 🎈🎊
I think we can do the following to avoid allocating too many codecs that are more or less all the same:

- `HttpContentCodec` should cache the created instances. This will reduce the number of codecs in the running server and also improve the efficiency of the `lookupCache`.
- The `lookupCache` could be a `ConcurrentHashMap`. I am not sure if that is needed, since a concurrent update would only lead to a cache miss; it might be okay to just keep it as is.
- Make `HttpContentCodec` a sealed trait and add a `Filtered` subtype that keeps a view on an actual `HttpContentCodec` instance. This way, when we call `only` to limit the codec to a certain media type, we can still use the `lookupCache` of the original codec.
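The ideas above could be sketched roughly like this. Note this is a simplified, self-contained sketch, not the actual zio-http code: `MediaType`, `BinaryCodec`, and the `Default` subtype are stand-ins with assumed shapes, and only `lookupCache`, `only`, and `Filtered` are names taken from the comment.

```scala
import java.util.concurrent.ConcurrentHashMap

// Stand-in types; the real zio-http definitions are richer.
final case class MediaType(mainType: String, subType: String)
trait BinaryCodec[A]

sealed trait HttpContentCodec[A] {
  // Candidate codecs keyed by media type.
  def choices: Map[MediaType, BinaryCodec[A]]

  def lookup(mediaType: MediaType): Option[BinaryCodec[A]]

  // Restrict the codec to a single media type. Returns a lightweight
  // view instead of allocating a fresh codec with its own cache.
  def only(mediaType: MediaType): HttpContentCodec[A] =
    HttpContentCodec.Filtered(this, mediaType)
}

object HttpContentCodec {
  final case class Default[A](choices: Map[MediaType, BinaryCodec[A]])
      extends HttpContentCodec[A] {
    // A plain ConcurrentHashMap is enough here: a racing update only
    // causes a recomputation (a cache miss), never a wrong result.
    private val lookupCache =
      new ConcurrentHashMap[MediaType, Option[BinaryCodec[A]]]()

    def lookup(mediaType: MediaType): Option[BinaryCodec[A]] =
      lookupCache.computeIfAbsent(mediaType, mt => choices.get(mt))
  }

  // A view on another codec: delegates lookups so the underlying
  // codec's lookupCache is reused rather than duplicated.
  final case class Filtered[A](
      underlying: HttpContentCodec[A],
      mediaType: MediaType
  ) extends HttpContentCodec[A] {
    def choices: Map[MediaType, BinaryCodec[A]] =
      underlying.choices.filter { case (mt, _) => mt == mediaType }

    def lookup(mt: MediaType): Option[BinaryCodec[A]] =
      if (mt == mediaType) underlying.lookup(mt) else None
  }
}
```

With this shape, `codec.only(MediaType("application", "json"))` allocates only a small `Filtered` wrapper, and every lookup through it still hits (and warms) the original codec's cache.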