The-Spooky-Frogs / LenelS2Dashboard

Repo for LenelS2 Dashboard Senior Project

Story: Investigate Possible Technological Solutions #18

Open sah7829 opened 1 year ago

sah7829 commented 1 year ago

Investigate Possible Technological Solutions

As an architect, I want to investigate a possible tech stack for our solution, so that team members can research those technologies and stakeholders can provide feedback.

Associated Epic: Requirements - project setup

Definition of Done

Acceptance Criteria

AndreasLc1103 commented 1 year ago

(Image attachment: technology comparison graphic referenced in the following comment.)

AndreasLc1103 commented 1 year ago

Today I started focusing on which technologies I need to look into. The graphic above helped me while researching how well certain services interoperate and how granular we will need to be to meet the requirements. It prompted me to plan, this week and into the weekend, on drafting a list of requirements to help clarify what we are trying to achieve. I also researched the ELK stack to better understand the open-source tools we could use to build, or at least host, a data-ingestion service: the connector between Grafana and the services we will likely be communicating with.
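As a first concrete look at that ingestion step, here is a minimal sketch of pushing one metric document into a local Elasticsearch node with the official Python client. The index name, field names, metric values, and localhost URL are assumptions for illustration, not decisions from this story.

```python
# Minimal sketch: index a single metric document into a local Elasticsearch
# node. Grafana (or Logstash) could then read from the same index.
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

# Local single-node instance, e.g. one started from the ELK Docker images.
es = Elasticsearch("http://localhost:9200")

doc = {
    "service": "lenel-s2-gateway",          # hypothetical service name
    "metric": "door_events_per_minute",     # hypothetical metric name
    "value": 42,
    "@timestamp": datetime.now(timezone.utc).isoformat(),
}

# Index the document into a placeholder index for dashboard metrics.
es.index(index="dashboard-metrics", document=doc)
```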

AndreasLc1103 commented 1 year ago

I'm moving this story back into the backlog to get a better idea of the specific ASRs that would help drive this design. Right now I have a general sense of what we are looking for, but I want to come up with a more defined list of requirements and then return to this story.

AndreasLc1103 commented 1 year ago

After continuing research into this, I think it would fundamentally be a good idea to borrow parts of the ELK stack.

AndreasLc1103 commented 1 year ago

Finalization of tools and technological solutions

Looking into this story, I think I finally made substantial progress while working with @sah7829 and Emily (sponsor) to define the scope of the project a bit more. After that conversation, there seemed to be agreement on how this SPOG should operate. For this iteration, the project should be "cloud ready": we will construct the system so that we can not only use low-cost open-source software to collect and process data, but also run it locally until it is ready to be made cloud accessible.

This allowed us to narrow down tools that let us interact with and handle data from one end of the data pipeline to the other. Drawing inspiration from the ELK stack, we can leverage its open-source tools, Elasticsearch and Logstash. The pipeline would run in a Docker-contained structure to control network connectivity, while also allowing us to scale resources horizontally if needed.
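To make the "cloud ready" intent concrete, here is a minimal sketch, assuming hypothetical environment variables named ES_URL and ES_API_KEY, of pipeline code that points at a local Docker instance by default and at a hosted cluster when configured:

```python
# Minimal sketch: the same container image runs locally or in the cloud,
# depending only on environment configuration. Variable names are assumptions.
import os

from elasticsearch import Elasticsearch


def build_es_client() -> Elasticsearch:
    url = os.environ.get("ES_URL", "http://localhost:9200")  # local default
    api_key = os.environ.get("ES_API_KEY")  # set only for a hosted cluster
    if api_key:
        return Elasticsearch(url, api_key=api_key)
    return Elasticsearch(url)
```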

ELK Stack

Tools that were introduced:

  1. Elasticsearch
  2. Logstash

Unknowns Found

After investigating how we would interact with Logstash and Elasticsearch, we still want a method for storing the metric data over time for easy lookup. This will let us meet the sponsors' requirement of maintaining persistent storage for metrics over time. With this in mind, in the next story #16 I will also include a breakdown of possible database structures that could satisfy this requirement; one illustrative document shape is sketched below.
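Purely as an illustration (the real breakdown belongs to #16), one candidate shape for a persisted metric reading might look like the following; every field name here is an assumption:

```python
# Hypothetical per-reading document: one record per metric sample, keyed so
# that lookups by metric name and time range stay cheap.
metric_reading = {
    "metric": "door_events_per_minute",   # hypothetical metric name
    "source": "logstash",                 # pipeline stage that produced it
    "value": 42.0,
    "timestamp": "2024-01-01T12:00:00Z",  # ISO 8601, UTC
}
```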

Possible options include:

AndreasLc1103 commented 12 months ago

After proposing an initial structure to the team, the approach seems more promising. Using a database that can interface with and store data coming out of Elasticsearch and Logstash looks like a solid way to keep data continuously updated throughout the pipeline. With this in mind, I think it's plausible that MongoDB Enterprise would be the right choice to meet the architectural requirements of the project. In this investigation, I found that there is a connector for Mongo that keeps stored documents up to date as data arrives from the output plugins provided with Logstash and Elasticsearch.
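As a rough sketch of that "keep documents up to date" behavior (not the actual Logstash-to-Mongo connector), an upsert keyed on metric name and timestamp means replayed pipeline events update rather than duplicate records. Database, collection, and field names below are assumptions:

```python
# Minimal sketch: upsert a metric reading into MongoDB so repeated deliveries
# of the same reading update the stored document instead of duplicating it.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
metrics = client["dashboard"]["metrics"]  # hypothetical db/collection names


def record_reading(metric: str, timestamp: str, value: float) -> None:
    metrics.update_one(
        {"metric": metric, "timestamp": timestamp},  # natural key
        {"$set": {"value": value}},
        upsert=True,
    )


record_reading("door_events_per_minute", "2024-01-01T12:00:00Z", 42.0)
```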

Sources for this information:

With that research complete, I deem this story COMPLETE 🍾