Closed Misax148 closed 3 months ago
Maybe it would be useful to use a LIFO queue to process those frames
I will try it
I was thinking that the logic of that component could be decoupled because it seems socket server is doing more than being a socket.
I have implemented it as you recommended, with one difference: I did not implement the LIFO algorithm. I implemented FIFO instead, because with LIFO some frames are lost during deserialization, and a lost frame is visible in the Nannou representation.
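To make the FIFO choice concrete, here is a minimal std-only sketch (the `Frame` type is hypothetical, not the project's real frame): frames are pushed at the back and popped from the front, so they are processed in arrival order instead of newest-first as a LIFO stack would do.

```rust
use std::collections::VecDeque;

// Hypothetical stand-in for the project's deserialized frame.
#[derive(Debug)]
struct Frame {
    id: u32,
}

fn main() {
    // FIFO: enqueue at the back, dequeue from the front,
    // so no older frame is skipped in favor of a newer one.
    let mut queue: VecDeque<Frame> = VecDeque::new();
    for id in 0..3 {
        queue.push_back(Frame { id });
    }
    while let Some(frame) = queue.pop_front() {
        println!("processing frame {}", frame.id); // prints 0, 1, 2 in order
    }
}
```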
The client socket has been implemented using the flatbuffers library.
The IPC between screen and bridge is performance critical, and the current serde implementation is slow.
On the other hand, we need a communication protocol between these two containers; it is also implemented using flatbuffers, which also has the generated files.
I've made a spike regarding this implementation, spike-xx/serialization, about how to pack data into a flatbuffer.
I've used this tutorial to create the spike.
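The actual spike uses schema files compiled by `flatc` plus the `flatbuffers` crate, which can't be reproduced here. As a rough std-only illustration of the underlying idea (a flat byte layout the receiver can read without an expensive parse step, unlike serde's generic deserialization), here is a manual little-endian packing sketch with hypothetical `id` and `amplitude` fields:

```rust
// NOT the FlatBuffers API: only a sketch of flat binary packing.
// Layout: bytes 0..4 = id (u32 LE), bytes 4..8 = amplitude (f32 LE).
fn pack(id: u32, amplitude: f32) -> Vec<u8> {
    let mut buf = Vec::with_capacity(8);
    buf.extend_from_slice(&id.to_le_bytes());
    buf.extend_from_slice(&amplitude.to_le_bytes());
    buf
}

fn unpack(buf: &[u8]) -> (u32, f32) {
    let id = u32::from_le_bytes(buf[0..4].try_into().unwrap());
    let amplitude = f32::from_le_bytes(buf[4..8].try_into().unwrap());
    (id, amplitude)
}

fn main() {
    let buf = pack(7, 0.5);
    assert_eq!(unpack(&buf), (7, 0.5));
    println!("round-trip ok: {:?}", unpack(&buf));
}
```

In real FlatBuffers the offsets come from the generated accessors rather than being hand-coded, which is what makes the schema evolvable.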
Yep, you're right, the correct Data Structure would be FIFO
I have created a UML based on the current relationship.
I must say that I find this relationship odd, AudioManager is meant to handle audio and Socket Server should provide it with the data. What do you think @codeFactory12?
Yes, it does, but its methods need to be called. AudioManager manages all the logic to play sounds based on an existing path, but it has to be called from somewhere so it can actually be used.
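The decoupling being discussed could look like the sketch below (all names and method shapes here are hypothetical, not the project's real API): the socket server stays a socket and only forwards data, while AudioManager owns the playback logic and is driven from outside.

```rust
// Hypothetical AudioManager: in the real component `play` would start
// playback of the file at `path`; here it just records the request so the
// call flow is visible.
struct AudioManager {
    played: Vec<String>,
}

impl AudioManager {
    fn new() -> Self {
        AudioManager { played: Vec::new() }
    }

    fn play(&mut self, path: &str) {
        self.played.push(path.to_string());
    }
}

// The socket-server side keeps no audio logic: it only hands the
// received sound path over to AudioManager.
fn handle_frame(audio: &mut AudioManager, sound_path: &str) {
    audio.play(sound_path);
}

fn main() {
    let mut audio = AudioManager::new();
    handle_frame(&mut audio, "assets/kick.wav");
    println!("played: {:?}", audio.played);
}
```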
US Related
Description
In this task, the server socket was implemented; it waits for the frames that the client socket will send from the console.
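A minimal std-only sketch of that waiting server over a Unix domain socket (the path here is hypothetical; the real one comes from the `SCREEN_SOCKET_SERVER` variable, and a thread stands in for the console's client socket):

```rust
use std::io::{Read, Write};
use std::os::unix::net::{UnixListener, UnixStream};
use std::thread;

fn main() -> std::io::Result<()> {
    // Hypothetical demo path; the project reads it from .env instead.
    let path = "/tmp/socket_console_demo";
    let _ = std::fs::remove_file(path); // remove a stale socket file, if any
    let listener = UnixListener::bind(path)?;

    // Stand-in for the console's client socket: connect and send one frame.
    let client = thread::spawn(move || {
        let mut stream = UnixStream::connect(path).unwrap();
        stream.write_all(b"frame-bytes").unwrap();
    });

    // The server blocks here, waiting for the client's frames.
    let (mut stream, _addr) = listener.accept()?;
    let mut buf = Vec::new();
    stream.read_to_end(&mut buf)?;
    println!("received {} bytes", buf.len()); // prints "received 11 bytes"

    client.join().unwrap();
    std::fs::remove_file(path)?;
    Ok(())
}
```

In the real component the received bytes would then be deserialized into a `Frame` and queued for Nannou.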
What did I do?

- Deserialized the `Frame` that I will receive from the client.
- Mapped the `Frame` to the necessary resources needed by the UI and sound in Nannou.

How did I do it?

- Used the `serde` library for deserialization.
- Used the `threadpool` library to handle sound separately to avoid visual loss.

Why did I do it?
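The PR uses the `threadpool` crate; the std-only sketch below shows the same idea with a single worker thread and a channel: sound work is handed off so the render loop never blocks on playback (the path and message type are illustrative only).

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Channel from the render side to a dedicated sound worker.
    let (tx, rx) = mpsc::channel::<String>();

    let sound_worker = thread::spawn(move || {
        for path in rx {
            // Stand-in for actual playback of the file at `path`.
            println!("playing {}", path);
        }
    });

    // Render-loop side: hand the sound off and keep drawing frames.
    tx.send("assets/kick.wav".to_string()).unwrap();

    drop(tx); // closing the channel lets the worker loop end
    sound_worker.join().unwrap();
}
```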
Notes
You need to add a `.env` file at the root of `screen` with the following information:

SCREEN_SOCKET_SERVER=/tmp/socket_console
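Assuming the repository root contains the `screen` directory, one way to create that file from the shell:

```shell
# Create the .env at the root of `screen` (run from the repository root).
mkdir -p screen
printf 'SCREEN_SOCKET_SERVER=/tmp/socket_console\n' > screen/.env
cat screen/.env
```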
Type of Change

Put an `x` in all the boxes that apply:

Pull Request Checklist

Put an `x` in all the boxes that apply: