asyncapi / community

AsyncAPI community-related stuff.
https://www.asyncapi.com/community

# List of Proposals for GSoC 2022 #272

Closed · derberg closed this issue 2 years ago

derberg commented 2 years ago

We highly recommend using the GitHub GSoC Issues link to get a list of all proposed GSoC tasks.

To make the list also available to GitHub users who are not logged in, have a look at the list below:

## Automatically generate docs

https://github.com/asyncapi/glee/issues/261

When describing the benefits of a spec-driven framework like Glee, I always mention the fact that we could have docs and code always in sync. However, we're not generating any docs; we're leaving this responsibility to the user, which kinda defeats the purpose 😅

#### Description

We should provide a way to automatically generate docs. Here are the ways that come to my mind:

1. Start an HTTP static server and serve the docs there. For instance, http://localhost:3000/docs would show the docs. Fully configurable from the glee.config.js file.
2. Generate docs in a folder inside the project. Fully configurable from the glee.config.js file, like the path to the folder and the Generator template to use. We can default to docs and https://github.com/asyncapi/markdown-template/ (see the sketch below).
3. A GitHub Action (maybe GitLab CI too?) that calls a webhook URL (or multiple ones?) to update external systems, like developer portals. E.g., a POST call to a URL containing the AsyncAPI definition. This should be a separate package though.
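A minimal sketch of how option 2 could surface in glee.config.js; the `docs` key and its fields are hypothetical, not an existing Glee option:

```js
// glee.config.js - hypothetical `docs` section; only the export shape matches
// Glee's real config file, the rest is an assumption.
export default async function () {
  return {
    docs: {
      enabled: true,                            // generate docs at all?
      folder: 'docs',                           // where generated docs land
      template: '@asyncapi/markdown-template',  // Generator template to use
    },
  };
}
```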

## Add support for HTTP servers and clients

https://github.com/asyncapi/glee/issues/260

Currently, Glee only supports the WebSocket and MQTT protocols. That makes it unusable for someone wanting to use any other protocol, like HTTP.

#### Description

Glee should be able to create an HTTP server (API) or a client out of an AsyncAPI definition. We can probably use the same x-kind extension defined in #259 as a way to understand when we want a client and when we want a server.

One key part of this issue is to actually work on the HTTP binding, especially the issue about adding support for multiple methods.
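A rough sketch of what such an adapter could look like, assuming the surface Glee's existing WebSocket and MQTT adapters use (name/connect/send); treat the import path and the signatures as assumptions:

```ts
import http from 'http';
import Adapter from '../lib/adapter.js'; // assumed location of Glee's base class

class HttpAdapter extends Adapter {
  name(): string {
    return 'HTTP adapter';
  }

  async connect(): Promise<this> {
    // For "local" servers, start an HTTP server and translate incoming
    // requests into Glee messages.
    const server = http.createServer((req, res) => {
      // ...hand the request off to Glee's function router here...
      res.end();
    });
    server.listen(3000);
    return this;
  }

  async send(message: unknown): Promise<void> {
    // For remote servers, issue an outgoing HTTP request carrying the message.
  }
}

export default HttpAdapter;
```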

## Add support for WebSocket clients

https://github.com/asyncapi/glee/issues/259

Currently, Glee only supports creating a WebSocket server. That makes it unusable when someone wants to build a WebSocket client to connect to an existing WS server.

#### Description

We already have support for WebSocket, but it always assumes we want to create a server, and that's plain wrong. Instead, Glee should default to creating a WS client and only create a server when we mark the server definition in the AsyncAPI file as "local".

For instance:

```yaml
asyncapi: 2.3.0
servers:
  myWebsiteWebSocketServer:
    url: 'ws://mywebsite.com/ws'
    protocol: ws
    x-kind: local
  someProductWSServer:
    url: 'ws://someproduct.com/ws'
    protocol: ws
```

The server myWebsiteWebSocketServer is a WS server we want to create with Glee, but the server someProductWSServer is a remote server we want to send/receive information to/from. The former is what's already implemented and the latter is what we should be implementing as part of this issue. Notice the x-kind: local on the myWebsiteWebSocketServer server. Only when this is found should we create a server; otherwise, we should default to a client. Another allowed value for x-kind is remote.

#### Notes

x-kind is an extension we're creating as part of this issue; it's not defined anywhere yet, so we have some freedom in defining how we want it to be. I'm trying to make it as similar as possible to the "Remotes and local servers" issue on the spec. In v3, we may end up having two root keys: servers (local servers) and remotes (remote servers or brokers). For v2.x (the current version), I think it's just fine if we add the x-kind extension, but I'm happy to listen to alternatives.

## Add support for AMQP 1.0

https://github.com/asyncapi/glee/issues/258

Currently, Glee only supports the WebSocket (server) and MQTT protocols. This means it's unusable for those using Kafka, AMQP, or any other protocol.

#### Description

We should add support for AMQP 1.0. In theory, it should be just a matter of adding a new adapter in the adapters folder. Beware: AMQP 1.0 and AMQP 0-9-1 are totally different protocols, despite having the same name.

## Add support for AMQP 0-9-1

https://github.com/asyncapi/glee/issues/257

Currently, Glee only supports the WebSocket (server) and MQTT protocols. This means it's unusable for those using Kafka, AMQP, or any other protocol.

#### Description

We should add support for AMQP 0-9-1. In theory, it should be just a matter of adding a new adapter in the adapters folder (see the sketch below).
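A minimal sketch of the plumbing such an adapter would wrap, using the amqplib package; the URL and queue name are placeholders:

```ts
import amqp from 'amqplib';

async function main() {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();
  await channel.assertQueue('user-signedup');

  // Receiving maps naturally to invoking Glee functions...
  await channel.consume('user-signedup', (msg) => {
    if (msg) {
      console.log(msg.content.toString());
      channel.ack(msg);
    }
  });

  // ...and sending maps to Glee's reply/send handling.
  channel.sendToQueue('user-signedup', Buffer.from('{"displayName":"Foo Bar"}'));
}

main().catch(console.error);
```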

## Add support for Kafka

https://github.com/asyncapi/glee/issues/256

Currently, Glee only supports the WebSocket (server) and MQTT protocols. This means it's unusable for those using Kafka, AMQP, or any other protocol.

#### Description

We should add support for Kafka. In theory, it should be just a matter of adding a new adapter in the adapters folder (see the sketch after the references). However, we should also take into account that Kafka supports exactly-once semantics (unlike MQTT and WebSockets) and, therefore, it may be a good idea to address asyncapi/server-api#27 first, although it's not required.

#### References

- https://www.baeldung.com/kafka-exactly-once
- https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/
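A sketch of the exactly-once angle using the kafkajs package: an idempotent, transactional producer. Broker address, topic, and transactionalId are placeholders:

```ts
import { Kafka } from 'kafkajs';

async function main() {
  const kafka = new Kafka({ clientId: 'glee', brokers: ['localhost:9092'] });
  const producer = kafka.producer({
    idempotent: true,                  // broker de-duplicates retried sends
    transactionalId: 'glee-producer',  // enables transactions
    maxInFlightRequests: 1,
  });
  await producer.connect();

  // Either every message in the transaction becomes visible to consumers, or none does.
  const transaction = await producer.transaction();
  try {
    await transaction.send({
      topic: 'user-signedup',
      messages: [{ value: '{"displayName":"Foo Bar"}' }],
    });
    await transaction.commit();
  } catch (err) {
    await transaction.abort();
    throw err;
  }
}

main().catch(console.error);
```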

## Integrate Testing

https://github.com/asyncapi/glee/issues/255

To test Glee applications, developers have to either rely on 3rd-party tools or write their own tests. This could be hugely simplified if Glee shipped with automated testing features.

#### Description

Developers would write tests within a Glee project in a separate directory. While writing tests, developers should only focus on the message being sent to the server and the response provided by the server, or the actions the server takes. Everything else would be handled by Glee.

There could be multiple options in the command for specifying the type of test to run. Also, snapshots could be saved for future snapshot testing if specified by the user.
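To illustrate the developer experience this could enable, here is an entirely hypothetical test; neither the `@asyncapi/glee/testing` module nor the `send` helper exists today:

```ts
import { send } from '@asyncapi/glee/testing'; // hypothetical module

test('server replies to a hello message', async () => {
  // Glee would boot the app, deliver the message, and capture the reply;
  // the test only cares about payloads.
  const reply = await send('hello', { payload: { greeting: 'hi' } });
  expect(reply.payload).toEqual({ greeting: 'hi' });
});
```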

## Library for easier integration testing of code generators

https://github.com/asyncapi/generator/issues/752

`It is not so hard to write a code generator, it is harder to maintain it` ~ Abraham Lincoln

When you write a code generator, you always write it with some use case in mind. You test it manually quite a lot before the initial release. You do it based on some AsyncAPI file, sometimes crafted for the purpose of the code generator; in most cases it is a specific version of the official streetlight example.

The problem comes with the PRs that follow: new features that are added, etc. You get contributions about a new protocol that gets supported, or support for some feature of AsyncAPI that was not yet supported. You can check if unit tests are added for specific code generation to check if good code will be generated. You can even add snapshot tests to check if generated files look exactly as you wanted them to. The problem is that you are never sure if generated code will still work for the previous scenario unless you manually generate an app and test it with a given broker.

#### Description

As a maintainer of a specific template that can generate code, I'd like to have a feature in the Generator, or maybe a separate library, that I can easily enable in my repository to get integration tests for my template:

- It might be that we should just enable this testing feature through https://github.com/asyncapi/generator/blob/master/docs/authoring.md#configuration-file, but it might also be that we need another standalone tool for it (this is most probable).
- As a template developer, I do not want to write tests like https://github.com/asyncapi/nodejs-template/blob/master/test/integration.test.js. I want it all provided to me by default:
  - by default, have a snapshot test that checks if generated files match the previous snapshot (sketched below)
  - I should be able to specify the location of the AsyncAPI file that I want to test against
  - I should be able to specify template parameters used in a given test
  - have the option to opt certain files out of the test
  - have the option to specify that I expect a specific generated file to contain specific "text"
- As a template developer, I want to have a solution in place that will take my AsyncAPI file, generate an "application", start a broker if needed, and perform a test operation that will evaluate if the generated application is really sending or receiving the expected message. Maybe we can integrate https://microcks.io/?

#### For GSoC participants

- you will code with JS or TS
- you will work on a solution that will be used by template maintainers across the AsyncAPI org
- you will have a chance to learn in detail how to write a testing library
- you will have a chance to work with Docker, virtualization, and testing automation
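A sketch of the kind of test the library could provide by default, built on the Generator's existing JS API plus Jest snapshots; paths and parameters are placeholders:

```ts
import path from 'path';
import { readFile } from 'fs/promises';
import Generator from '@asyncapi/generator';

test('template output matches the stored snapshot', async () => {
  const outputDir = path.join(__dirname, 'output');
  const generator = new Generator(path.join(__dirname, '..'), outputDir, {
    forceWrite: true,
    templateParams: { server: 'production' }, // hypothetical parameter
  });
  await generator.generateFromFile(path.join(__dirname, 'asyncapi.yml'));

  const generated = await readFile(path.join(outputDir, 'index.js'), 'utf8');
  expect(generated).toMatchSnapshot();
});
```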

## Measuring AsyncAPI spec adoption

asyncapi/website#780

We do not know how many people use AsyncAPI. The most accurate number we could get is the number of AsyncAPI users that work with AsyncAPI documents. But how do we measure how many people out there created/edited an AsyncAPI file?

The answer is a solution that includes:

- SchemaStore
- promotion of using asyncapi in the filename of files created using the AsyncAPI spec
- having a server where we expose all JSON Schemas provided for AsyncAPI
- storing info somewhere whenever a JSON Schema is fetched by users, so we can count it as "usage"

Some more discussion -> https://asyncapi.slack.com/archives/C0230UAM6R3/p1622198311005900

#### Description

1. Create a new endpoint in the server-api service that anyone can use to fetch AsyncAPI JSON Schema files of any version (a sketch follows below).
2. JSON Schemas are in https://github.com/asyncapi/spec-json-schemas and can be used as a normal dependency.
3. Whenever a JSON Schema file is fetched by the user, information should be stored somewhere. I propose Google Tag Manager, as we already have it for the website; we can send data there and then easily read it. I'm all ears if there is something better and still free.
4. Add AsyncAPI config to SchemaStore and have a configuration on the AsyncAPI side that will always automatically open a PR against SchemaStore to provide the location of the JSON Schema for each new AsyncAPI spec version.
5. Update docs and instructions for users on how to configure the IDE properly and how to name files. Update the official examples.

If time is left, we need to expose the numbers somewhere: either embed a Google Analytics diagram somewhere on the AsyncAPI website or at least have an API endpoint that exposes the latest numbers.

#### For GSoC participants

- you get to code TS in a service that is publicly available, and you can be sure your work will be consumed by thousands of people
- you will learn automation with GitHub Actions
- you will have a chance to learn how to integrate with different services, like Google APIs, unless you find a better solution and better API to use
- you will learn in-depth how autocompletion in IDEs is done with SchemaStore
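A sketch of what step 1 could look like in an Express-style service; the route, the analytics helper, and the exact shape of the @asyncapi/specs export are assumptions:

```ts
import express from 'express';
// spec-json-schemas is published as @asyncapi/specs; assumed here to expose a
// map keyed by spec version.
import schemas from '@asyncapi/specs';

const app = express();

app.get('/schemas/:version', (req, res) => {
  const schema = (schemas as Record<string, unknown>)[req.params.version];
  if (!schema) return res.status(404).json({ error: 'Unknown AsyncAPI version' });
  trackSchemaFetch(req.params.version); // hypothetical helper reporting the hit
  res.json(schema);
});

// Hypothetical analytics hook; Google Tag Manager is one option proposed above.
function trackSchemaFetch(version: string): void {
  console.log(`schema fetched: ${version}`);
}

app.listen(3000);
```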

## Create Repository Settings Keeper

https://github.com/asyncapi/.github/issues/137

We have 60 repositories in the AsyncAPI organization and it is impossible to stay in sync with settings across all of them.

We have https://github.com/asyncapi/.github/blob/master/repository-settings.md but it is completely ignored, and thus also no longer updated.

#### Description

We need an application that enables us to manage the settings of a GitHub repository through a config file stored in that repository.

- Imagine you have a file in a repo that is called .projects.settings.keeper
- It is a YAML file with a structure that has info about:
  - discussions: true - enables the Discussions tab for the project
  - pr: ['squashandmerge'] - enables only squash-and-merge on PRs
  - there should be a list of branch protection settings and default workflows that should be blocking PRs
- There are also extra settings like sonarcloud: true or coveralls: true, which mean the application should make sure SonarCloud or Coveralls is enabled for a given project
- Based on CODEOWNERS, the app adds given users as maintainers of the repo
- Once .projects.settings.keeper is created, global workflow synchronization is triggered to get default workflows into the repo
- In the .github repo we should have a recommended .projects.settings.keeper for every repo. We should explore if there is an event, like "new repo created", that would automatically add .projects.settings.keeper to a newly created repo

A sketch of how applying such a config could look follows this list.

## For GSoC participants

- you will write some JS code
- you will learn GitHub Actions and the GitHub API
- you will play a lot with different REST APIs to integrate them together
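A sketch of applying two of the proposed keys with Octokit; the mapping from config keys to GitHub API fields is an assumption:

```ts
import { Octokit } from '@octokit/rest';

interface KeeperConfig {
  discussions: boolean;
  pr: string[];
}

async function applySettings(owner: string, repo: string, config: KeeperConfig) {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  // PATCH /repos/{owner}/{repo} carries most of the toggles we need.
  await octokit.rest.repos.update({
    owner,
    repo,
    has_discussions: config.discussions,
    allow_squash_merge: config.pr.includes('squashandmerge'),
    allow_merge_commit: false,
    allow_rebase_merge: false,
  });
}
```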

## Rewrite this template and the NodeJS WS template to the new React rendering engine

https://github.com/asyncapi/nodejs-template/issues/133

This template and also nodejs-ws-template are written with the old Nunjucks templating engine. We need to rewrite them to the React templating engine that should become the default and only engine in the future.

#### Description

1. First rewrite nodejs-ws-template, as this is a template used more as a showcase, for demos, so any mistake in the work and release will not cause issues for users in production.
2. Then rewrite this template to the new engine.

For reference, a minimal example of a React-engine template file follows the list below.

With this task you will learn in depth:

- how code generation works
- what the purpose of code generation is and what you can generate
- the structure of the AsyncAPI spec, its purpose, and what it gives you
- you will write JavaScript code and unit tests
- you will learn the AsyncAPI parser and use it extensively
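A minimal React-engine template file using @asyncapi/generator-react-sdk; the file name and contents are placeholders:

```jsx
import { File, Text } from '@asyncapi/generator-react-sdk';

// The Generator calls this with the parsed AsyncAPI document and renders the
// returned <File> into the output directory.
export default function ({ asyncapi }) {
  return (
    <File name="index.js">
      <Text>// Generated from: {asyncapi.info().title()}</Text>
    </File>
  );
}
```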

## Drag&drop AsyncAPI block builder

https://github.com/asyncapi/studio/issues/265

Currently we have the Visualiser in Studio, which renders the flow of operations in an application described using AsyncAPI. Unfortunately, it only works in read-only mode.

A very good solution would be to add the ability to create AsyncAPI specifications from scratch (or update existing ones) using drag-and-drop blocks, which could be connected, similar to the current blocks in the Visualiser.

Project assumptions:

- the new tool should be written in React (in TS) using the https://github.com/wbkd/react-flow library
- there should be a registry of blocks that can be dragged
- the blocks should reflect the real structure of AsyncAPI, but they should not be too scattered, i.e. blocks for the Message Object and Operation Object are ok, but a block for Description etc. is not acceptable
- each block should be able to edit the data it contains, e.g. the Operation block should have the possibility to choose the kind of operation, the corresponding message(s), etc.
- there should be information that something is badly designed or is missing required data in a block
- to make the solution generic (supporting 2.X.X, 3.X.X, and later versions), the project should be based on the JSON Schema of the AsyncAPI spec, if possible

A small react-flow sketch follows below. Whether the new tool should be standalone (a separate project) or integrated in Studio is to be discussed.

The scope of the project may change by June; the above is just an overview of what needs to be done. Feel free to start a discussion if you have questions :)

To open the current Visualiser, go to https://studio.asyncapi.com/, choose the fourth node in the left navigation, and you should see blocks :)
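A tiny sketch of two connected blocks with react-flow, using the nodes/edges API of v10 (published as react-flow-renderer at the time):

```tsx
import ReactFlow, { Node, Edge } from 'react-flow-renderer';

// Two draggable blocks mirroring AsyncAPI objects, wired together.
const nodes: Node[] = [
  { id: 'operation', position: { x: 0, y: 0 }, data: { label: 'Operation Object' } },
  { id: 'message', position: { x: 250, y: 0 }, data: { label: 'Message Object' } },
];

const edges: Edge[] = [{ id: 'op-msg', source: 'operation', target: 'message' }];

export default function BlockBuilder() {
  return <ReactFlow nodes={nodes} edges={edges} />;
}
```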

## Website UI Kit design/dev project

https://github.com/asyncapi/design-system/issues/4

## Introduction

While updating the visual style of the website in this PR, I noticed that the website is lacking a way to keep repeated elements looking cohesive. In other words, there are a lot of places where components need to be created for even the smallest of elements that repeat on the website. Tailwind CSS is great, but I think only if used properly, by applying the class names within a component instead of copying and pasting the classes every time you want to create a new element that uses the same visual style as something else. (A small sketch of this componentization sits at the end of this proposal.)

## So you might ask: what are the steps that we should take to resolve this problem?

It may seem like this is only a problem within the code, but I think we can use this problem to start defining design patterns across the site and see what we should keep, what to discard because it is redundant, and what styles we can make adjustments to.

This issue will require 2 types of work:
- design - this will be labelled next to the step that requires work in Figma
- development - this label will be added next to the step that requires coding

--------

### Step 1: Audit all design patterns on the website currently [design]

This is imperative so that we have an "inventory" of all the things that currently exist on the website, to get an idea of what we are missing or need to improve the design of. At this stage, all visual elements on the website are sorted into buckets and we discuss what elements we need to add/remove/improve.

--------

### Step 2: Create components in Figma [design]

At this stage, we will start by assembling the smallest components (atoms) together to make larger components (molecules), and then assembling the larger components together to make complex components (organisms).

--------

### Step 3: Finalize components and develop in Storybook [design] [development]

Once we have a finalized version of all the components and their various states in Figma, we can begin to develop them within the Storybook of this repo. The collaboration between design and development here is important to the success of the working components. We will need to make sure the components are engineered to be dynamic.

--------

### Step 4: Test components, gather feedback, iterate on design [design] [development]

It is important to test the components and make sure that everything is working as expected. If a design, when translated into code, is not working out as planned or has failures, we can use this step to make any necessary changes.

--------

### Step 5: Document appropriate usage of components [design] [development]

Once we have finalized our set of components, we will then need to document their usage both from a design perspective and an engineering perspective, so that we ensure cohesiveness across community contributions.

--------

As always, feedback is welcome on this proposal! 😄
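A sketch of the componentization the introduction argues for: the Tailwind classes live in exactly one place instead of being copied wherever a button appears. The class names here are illustrative, not the website's actual styles:

```tsx
import type { ReactNode } from 'react';

// One source of truth for button styling across the site.
export default function Button({ children }: { children: ReactNode }) {
  return (
    <button className="px-4 py-2 rounded bg-primary-500 text-white hover:bg-primary-600">
      {children}
    </button>
  );
}
```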

## An interface/project for describing errors/problems in tools in our organization

https://github.com/asyncapi/community/issues/266

We have a lot of tools in our organization. Some of them have their own error/problem handling system; some don't have one at all and only throw exceptions from the main function. I think we should standardize this.

In the ServerAPI as well as in the ParserJS we have such a system that we can extend and use in other repos.

In these repos we use an interface called Problem to describe all errors, so we don't reinvent the wheel defining our "own error format".

Check out these posts for more context:
- Indicating Problems in HTTP APIs
- Understanding Problem JSON
- Succeeding in Failing - Darrel Miller

The implementations in ServerAPI and ParserJS are a little different, so we should standardize them. In this task we have to:
- think about where we should have such functionality, in a new repo or maybe in an existing one - important: we will use the new library in separate projects, so we cannot accept a solution where we duplicate the code
- write the Problem interface and helpers - we should discuss what helpers we need, but at least we need functions to create a problem, retrieve the type of a problem, merge two or more problems into one, etc. (a sketch follows below)
- write unit tests
- write docs on how to use the library and how to create a new type of Problem
- create a registry of all available problems inside the organization - to discuss
- provide TS types, so I recommend writing the new library in TS, not in pure JS
- replace the existing solution in ParserJS or ServerAPI (to choose) with the new one
- create a markdown "template" which will have a structure to describe problems in the readme of a project (like https://github.com/asyncapi/parser-js#error-types)

Nice to have but not needed:
- integrate the registry of problems in the ServerAPI - to discuss. As a problem has a type field, we need a page to describe the details of a given type of problem. I think https://api.server.com/problem/{type} would be good.

Existing implementation in the ServerAPI - https://github.com/asyncapi/server-api/blob/master/src/exceptions/problem.exception.ts
Existing implementation in the ParserJS - https://github.com/asyncapi/parser-js/blob/master/lib/errors/parser-error.js
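A sketch of the interface and one helper; the fields type, title, status, detail, and instance come from RFC 7807 ("Problem Details for HTTP APIs"), while the helper's shape is an assumption:

```ts
interface Problem {
  type: string;       // URI identifying the problem type
  title: string;      // short human-readable summary
  status?: number;    // HTTP status code, when applicable
  detail?: string;    // explanation specific to this occurrence
  instance?: string;  // URI of the specific occurrence
  [extension: string]: unknown; // problem-type-specific extras
}

function createProblem(type: string, title: string, extras: Record<string, unknown> = {}): Problem {
  return { type, title, ...extras };
}

// Usage:
const problem = createProblem(
  'https://api.server.com/problem/invalid-document',
  'Document is not a valid AsyncAPI file',
  { status: 422 },
);
```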

## Modelina website

https://github.com/asyncapi/modelina/issues/637

As issue #636 focuses on the playground for Modelina, we also need to take a look at how we present Modelina as a whole. This probably means separating https://www.asyncapi.com/tools/modelina into its own website.

This means that we will be able to show documentation, examples, the playground, and the roadmap for Modelina on one platform.

#### Documentation

It is important that everything flows through GitHub; this means that the documentation shown on the website should be taken directly from the docs in https://github.com/asyncapi/modelina/tree/master/docs. This means we need to figure out how we can render markdown files (dynamically, because we don't want to manually update the website once changes are made). A small sketch of this follows at the end of this proposal.

#### Examples

Examples serve as a way to not only test that users can always do the specifics, but also give the user an easy way to see how and why one should use a specific feature. Therefore, we need to find a way to dynamically show the different examples in the repository on the website in a way that improves user experience.

Maybe even with a "Try with Playground" button 🤔

#### Playground

While the playground gets improved by somebody else, we need to make the flow between docs/examples and the playground nice.

#### Roadmap

Just as AsyncAPI as a whole has a vision, Modelina should too, and it should explicitly show what we are pushing forward in certain areas and why. Of course, this vision and roadmap are dynamic, as the community affects what is and should be focused on, but this gives us a way to display it publicly 🙂
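A small sketch of the dynamic-docs idea: fetch the raw markdown straight from GitHub so the site never goes stale. The file name is a placeholder:

```ts
const DOCS_BASE = 'https://raw.githubusercontent.com/asyncapi/modelina/master/docs';

// Fetch one docs page at request/build time; feed the result into any
// markdown renderer.
async function fetchDoc(file: string): Promise<string> {
  const res = await fetch(`${DOCS_BASE}/${file}`);
  if (!res.ok) throw new Error(`Could not load ${file}: ${res.status}`);
  return res.text();
}

// Usage: fetchDoc('usage.md').then((md) => console.log(md.slice(0, 80)));
```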

## Improvements to Modelina playground

https://github.com/asyncapi/modelina/issues/636

We should create a more coherent playground that preserves the playful nature while being the main place to try out different setups with Modelina.

There are so many different improvements we can make, so I am just gonna list a bunch that came to mind:

- Sharing the modeling setup through queries, i.e. one could give you the link modelina?generator=typescript&input=A4Gabuqwj... and it would automatically set the corresponding generator options as well as the input (see the sketch after this list).
- Downloading models directly from the playground, so you don't have to use the (upcoming) CLI to do it.
- Modelina does not only support AsyncAPI as input, but multiple others, so we should allow such inputs in the playground.
- Improve the UX for how the generated models are shown, because if Modelina generates more than 5, it becomes chaotic.
- Make it easier for people to start using the playground by "showing" people around the UI.
- Same as Studio, I think it makes sense to have a template kind of setup to quickly play around with different setups.

Can you think of anything else we can do to the Modelina playground to improve it?
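A sketch of the shareable-link idea from the first bullet; base64 is just one way to keep arbitrary input URL-safe, and a real implementation would likely compress it too (Buffer is the Node flavor; a browser would use btoa/atob):

```ts
function encodeSetup(generator: string, input: string): string {
  const params = new URLSearchParams({
    generator,
    input: Buffer.from(input).toString('base64'),
  });
  return `https://www.asyncapi.com/tools/modelina?${params}`;
}

function decodeSetup(search: string): { generator: string | null; input: string | null } {
  const params = new URLSearchParams(search);
  const raw = params.get('input');
  return {
    generator: params.get('generator'),
    input: raw ? Buffer.from(raw, 'base64').toString('utf8') : null,
  };
}
```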

## Build a React app to visualize GitHub Actions across the organization

https://github.com/asyncapi/.github/issues/136

This may not be the best place for this issue; I am creating it here until it finds a home. :smiley:

### Reason/Context

We are an automation-driven community and we use GitHub Actions to automate lots of things in the organization. GitHub Actions aren't unlimited, and we need a clear picture of which workflows are using what amount of resources and how we can get the most out of GitHub Actions. GitHub currently doesn't provide a tool to monitor and see statistics about workflows across an organization.

### Description

Basically, we need a web app capable of monitoring and visualizing GitHub Actions metrics across the organization using the GitHub API.

#### Features

- The web app needs to be able to visualize statistics about each workflow across the organization, like average run time, the average number of runs per day, etc. (see the sketch at the end of this proposal)
- Besides monitoring each workflow, there should be a graph of some kind so the user can compare workflows and figure out the most resource-consuming ones.
- It needs to have some kind of caching implemented so it doesn't overload the GitHub API.

#### Tech stack

- React.js
- you can use TypeScript or JavaScript
- D3.js or a library of your choice

#### Similar projects

I could only find the https://github.com/amirha97/github-actions-stats project, which is primitive and visualizes the run time of actions for one repo.
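A sketch of pulling the raw numbers with Octokit: average run time over a repository's recent workflow runs. Pagination and caching are omitted:

```ts
import { Octokit } from '@octokit/rest';

async function averageRunTimeMs(owner: string, repo: string): Promise<number> {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  const { data } = await octokit.rest.actions.listWorkflowRunsForRepo({
    owner,
    repo,
    per_page: 100,
  });
  // Duration of each completed run, from start to last update.
  const durations = data.workflow_runs
    .filter((run) => run.status === 'completed')
    .map((run) =>
      new Date(run.updated_at).getTime() -
      new Date(run.run_started_at ?? run.created_at).getTime(),
    );
  return durations.reduce((sum, d) => sum + d, 0) / Math.max(durations.length, 1);
}
```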

## MVP integration of the extensions catalog with AsyncAPI tools to make the extension catalog useful

https://github.com/asyncapi/extensions-catalog/issues/78

I don't think there were many situations in the past when someone asked about an option to have a kind of official extension. This feature is still important, though. People use extensions and we should support at least the official ones in our tools. There is a lot of work that has to be done, and IMHO we need to get the work done in stages.

The MVP is where I'd like us to focus on an alpha implementation (stage 1): a simple solution that will serve us (spec maintainers) and the community (folks that want to add new features to the spec) as a place where possible features for the spec can first be implemented as extensions.

That way, an RFC like https://github.com/asyncapi/spec/pull/584 can be battle-tested much faster as an extension first, to speed up work on adding the feature to the spec later with more confidence.

#### Description

What needs to be done, in short?

1. I want to be able to create the following doc with the x-twitter extension:

```yaml
asyncapi: '2.3.0'
info:
  title: Account Service
  version: 1.0.0
  description: This service is in charge of processing user signups
  x-twitter: AsyncAPISpec
channels:
  user/signedup:
    subscribe:
      message:
        $ref: '#/components/messages/UserSignedUp'
components:
  messages:
    UserSignedUp:
      payload:
        type: object
        properties:
          displayName:
            type: string
            description: Name of the user
          email:
            type: string
            format: email
            description: Email of the user
```

2. I want the AsyncAPI JavaScript parser to provide a validation error after validating the x-twitter extension against https://github.com/asyncapi/extensions-catalog/blob/master/extensions/twitter/0.1.0.yaml. The error should tell me that my twitter handle doesn't fulfill the given pattern, which requires @ at the beginning of the twitter handle. I want the react component to throw this validation error when I check this document in Studio. (A sketch of such validation follows below.)
3. When I fix my document based on the above validation errors, I see the react component render information about the twitter handle. The MVP should not cover the concept of react component plugins for that extension, so the component just renders a twitter icon and uses the provided twitter handle in a link to the twitter profile.
4. Using the already provided helpers at https://github.com/asyncapi/parser-js/blob/master/lib/mixins/specification-extensions.js should be enough for now.
5. Extensions should be documented in the extensions repo, and this documentation should also be available on the website, like the spec. No need for a fancy UI for now; it can be done like in the case of the spec: a markdown file that lists all extensions and describes them.
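A sketch of the validation in step 2 using Ajv; the pattern here is an assumption about what the twitter extension's schema enforces (the real schema lives in the extensions-catalog repo):

```ts
import Ajv from 'ajv';

const twitterExtensionSchema = {
  type: 'string',
  pattern: '^@', // assumed: handles must start with @
};

const ajv = new Ajv();
const validate = ajv.compile(twitterExtensionSchema);

const handle = 'AsyncAPISpec'; // the value of x-twitter in the doc above
if (!validate(handle)) {
  // validate.errors carries the keyword ('pattern') and instance path,
  // which the parser/Studio would surface as a validation error.
  console.error('x-twitter is invalid:', validate.errors);
}
```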

## Testing generated code in-depth

https://github.com/asyncapi/modelina/issues/612

Modelina is aiming to be the de facto standard for generating data models, whether it is to generate them in templates or somewhere else.

Imagine Modelina has to support 50+ different inputs and 50+ different outputs, with 100+ different configurations in each output language. How do we ensure that we always produce the expected data models and that they work at runtime?

Currently, we only test that we generate the expected output through snapshot matching and that it can compile/transpile. Neither ensures that the generated output actually works.

For example, say we generate Java data models that override the hashCode method; we want to ensure that it does exactly what it is supposed to do at runtime. That is, one instance of a model should match another instance when they are created with the same values. This means we need to be able to write tests such as the following:

```java
@Test
public void testHashcode_Symmetric() {
  Person x = new Person("Foo Bar"); // equals and hashCode check name field value
  Person y = new Person("Foo Bar");
  Assert.assertTrue(x.hashCode() == y.hashCode());
}

@Test
public void testHashcode_NonSymmetric() {
  Person x = new Person("Foo Bar");
  Person y = new Person("Foo");
  Assert.assertFalse(x.hashCode() == y.hashCode());
}
```

So we need to figure out how we, from TS, can run and potentially generate tests in all of the output languages 🤯 But don't worry, we start small and keep iterating. (A sketch of one approach follows at the end of this section.)

### GSoC status

Difficulty: HARD (MEDIUM, if you are already well established in the output languages)

What you will learn:
- What Modelina is and how it works.
  - This includes the basics of AsyncAPI, JSON Schema, and OpenAPI.
- GitHub CI and workflows used by Modelina.
- How to work with code generation.
- How you can test code generation in other projects such as code templates (the same principles apply).
- How to create basic projects and tests in Java
- How to create basic projects and tests in C#
- How to create basic projects and tests in Go

Requirements:
- You should have basic knowledge of TypeScript, as this is the language Modelina is written in.
- Hunger for working in multiple languages and with code generation.
  - To solve this issue you need to like working in other languages, since the tests will be written in the output language and not TS. You don't need to know the languages, but you must be willing to learn!
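One small sketch of how the TS side could drive tests in an output language: write the generated models and runtime tests into a temporary project, then shell out to that language's test runner and let its exit code fail the TS-side test:

```ts
import { execSync } from 'child_process';

// Runs e.g. the Java tests of a temporary Maven project containing the
// generated models; throws (failing the TS test) if the run exits non-zero.
function runJavaTests(projectDir: string): void {
  execSync('mvn test', { cwd: projectDir, stdio: 'inherit' });
}
```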

## Create new page for /tools/

https://github.com/asyncapi/website/issues/383

Latest description for the issue -> the list can still be enhanced manually, as some tools are not on GitHub; there are some on GitLab.

#### Reason/Context

- Why do we need this improvement?
  Creating a new index page for the tools page.
- How will this change help?
  We can say what each and every tool does.
- What is the motivation?
  We are moving pages related to tools under /tools/*, so a dedicated index page for tools alone is needed.

#### Description

- What changes have to be introduced?
  A new page.
- Will this be a breaking change?
  It's a new page; changes are mostly restricted to it. If anything breaks, it will mostly be this page.
- How could it be implemented/designed?

## Add support for retries, backpressure, and at-most-once, at-least-once, and exactly-once semantics

https://github.com/asyncapi/glee/issues/27

There are many points in the existing codebase that can lead to lost messages.

#### Description

We should review and identify which parts of the codebase can cause problems and fix them. We should be looking at:

1. Retries. What if calling a function fails? Especially when we implement the HTTP runtime. We should be retrying and letting the user configure how to do it (see the sketch after this list).
2. Backpressure. What happens if a client/broker starts sending too many messages too quickly? We should be able to control the number of messages per second we want to receive, telling the client/broker to slow down. This is not always possible, but it should definitely be implemented for those protocols that provide a mechanism for it.
3. At-most-once, at-least-once, and exactly-once semantics. We should be able to guarantee that a message is going to be received either at most once, at least once, or exactly once. Some protocols don't have a mechanism for that, so we should focus on those that have it.
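A minimal sketch of the retry piece, assuming exponential backoff is the policy we'd let users configure:

```ts
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off exponentially before the next attempt: 100ms, 200ms, 400ms...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```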

## Add support for an HTTP/FaaS runtime

https://github.com/asyncapi/glee/issues/25

We're currently hosting the functions within the same repository, but it could be useful to have them as serverless functions.

#### Description

Implement an HTTP runtime that makes an HTTP call to a FaaS (Function as a Service) provider and waits for the HTTP response.

The HTTP response may vary from one provider to another. Off the top of my head, we should support AWS Lambda, Netlify Functions, Vercel, Azure Functions, and GCP Cloud Functions.
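A minimal sketch of the core of such a runtime: POST the message to the function's URL and wait for the response; the provider-specific response handling the issue mentions is left out:

```ts
async function invokeFunction(url: string, message: unknown): Promise<unknown> {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(message),
  });
  if (!res.ok) throw new Error(`Function invocation failed: ${res.status}`);
  return res.json();
}
```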

## Automate listing of members of the technical steering committee

https://github.com/asyncapi/.github/issues/47

Most up-to-date description -> https://github.com/asyncapi/.github/issues/47#issuecomment-1046222917

Our open governance model introduces a TSC that consists of all the CODEOWNERS that want to use their right to vote in the TSC decision-making process.

We need a bot/GitHub Action that will read VOTERS files from all repos, maintain a single list, and put it on the website.

#### Description

- get a GitHub Action that reacts to any push to master and checks if the VOTERS file was edited, then reads it and adds/removes/modifies a voter in the list stored on the website (a sketch of reading a VOTERS file follows below)
- get a GitHub Action that, on the PR level, validates modifications to the VOTERS file and blocks the PR in case the voters cannot be added to the TSC list because they are affiliated with a company that has already reached its limit of representation
- decide on the structure of the VOTERS file
- get a mechanism that collects more details about TSC members (social accounts, hire availability, etc.)
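A sketch of the "read VOTERS from a repo" step with Octokit, assuming VOTERS is a plain-text file with one GitHub handle per line:

```ts
import { Octokit } from '@octokit/rest';

async function readVoters(owner: string, repo: string): Promise<string[]> {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  const { data } = await octokit.rest.repos.getContent({ owner, repo, path: 'VOTERS' });
  // getContent returns an array for directories; we only care about files.
  if (Array.isArray(data) || !('content' in data)) return [];
  const text = Buffer.from(data.content, 'base64').toString('utf8');
  return text
    .split('\n')
    .map((line) => line.trim())
    .filter(Boolean);
}
```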

derberg commented 2 years ago

not needed, we were not accepted

rukundob451 commented 1 year ago

@derberg can these be available for mentorship 2023?