EnterpriseQualityCoding / FizzBuzzEnterpriseEdition

FizzBuzz Enterprise Edition is a no-nonsense implementation of FizzBuzz made by serious businessmen for serious business purposes.

Split the monolith #265

TiBeN opened this issue 8 years ago

TiBeN commented 8 years ago

There should be one microservice for Loop, one for Strategies, and one for Output. All services should communicate through SOAP because it works better with JAVA.
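For illustration, a minimal sketch of what the Loop service's contract might look like, assuming JAX-WS (javax.jws) is available; every name here is hypothetical:

```java
import javax.jws.WebMethod;
import javax.jws.WebService;

// Hypothetical SOAP contract for the proposed Loop microservice.
// The Strategies and Output services would expose similar endpoints.
@WebService(name = "LoopService", targetNamespace = "http://fizzbuzz.example.com/loop")
public interface LoopService {

    // Iterate from 1 to limit, delegating each number to the Strategies
    // service and forwarding the chosen string to the Output service.
    @WebMethod
    void runLoop(int limit);
}
```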

battlesnake commented 8 years ago

But those SOAP messages should all go through an adapter layer that passes the actual messages as JSON, though.

TiBeN commented 8 years ago

It's a good idea, but it is far too restrictive. It would be more appropriate to use an AbstractAdapter in case we want to change the format to XML later.
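Something along these lines, say; purely a sketch, and the class names are hypothetical:

```java
// Hypothetical format-agnostic adapter: each concrete subclass decides
// whether the payload goes over the wire as JSON, XML, or whatever
// the enterprise requires next quarter.
public abstract class AbstractMessageFormatAdapter {

    // Convert the intermediate representation into the wire format.
    protected abstract String serialize(Object message);

    // Convert the wire format back into the intermediate representation.
    protected abstract Object deserialize(String payload);
}

// Switching from JSON to XML later means adding one subclass,
// not touching any callers.
class JsonMessageFormatAdapter extends AbstractMessageFormatAdapter {
    @Override
    protected String serialize(Object message) {
        return "{\"value\":\"" + message + "\"}";
    }

    @Override
    protected Object deserialize(String payload) {
        return payload; // sketch only; a real parser would go here
    }
}
```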

dbuschman7 commented 8 years ago

@battlesnake Only if the JSON is escaped first and then wrapped in the XML Envelope. :)
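Roughly like so, as a sketch (envelope abbreviated, names hypothetical):

```java
// Sketch: escape the JSON payload, then wrap it in a SOAP-style envelope.
public class EnvelopeWrapper {

    static String escapeXml(String s) {
        return s.replace("&", "&amp;")   // must come first
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;");
    }

    public static String wrap(String json) {
        return "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
             + "<soap:Body><payload>" + escapeXml(json) + "</payload></soap:Body>"
             + "</soap:Envelope>";
    }

    public static void main(String[] args) {
        System.out.println(wrap("{\"number\":3,\"result\":\"Fizz\"}"));
    }
}
```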

battlesnake commented 8 years ago

And the XML is parsed using a Regular Expression?
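Something like this, presumably; a sketch of the idea, which is famously fragile (nesting, attributes, CDATA, and namespaces all break it):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: "parsing" the envelope with a regular expression.
// Works on exactly one shape of document and no others.
public class RegexXmlParser {
    private static final Pattern PAYLOAD = Pattern.compile("<payload>(.*?)</payload>");

    public static String extractPayload(String envelope) {
        Matcher m = PAYLOAD.matcher(envelope);
        return m.find() ? m.group(1) : null;
    }
}
```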

dbuschman7 commented 8 years ago

@battlesnake Custom ANTLR grammar. :)

sander-bol commented 8 years ago

This solution sounds like a good attempt at achieving business-IT alignment, but it is severely lacking in maintainability. Recommend implementing an ESB such as Synapse to achieve strong decoupling.

battlesnake commented 8 years ago

We need logging to a Mongo database too, because Mongo is web-scale

charliemitchell commented 7 years ago

Obviously using Elasticsearch is a better option for logging. But it needs to be stored in Mongo and transmitted to the logging service via RabbitMQ; then the elastic service will pick it up and store it in Elasticsearch for easy use with Logstash/Kibana.
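As a sketch of the RabbitMQ leg only, assuming the standard Java client (com.rabbitmq.client); the queue name is hypothetical:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

// Sketch: publish one log event to RabbitMQ, from which the
// (hypothetical) elastic service would index it into Elasticsearch.
public class LogPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.queueDeclare("fizzbuzz-logs", true, false, false, null);
            String event = "{\"number\":15,\"result\":\"FizzBuzz\"}";
            channel.basicPublish("", "fizzbuzz-logs", null,
                    event.getBytes(StandardCharsets.UTF_8));
        }
    }
}
```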

Qix- commented 7 years ago

Current best practices suggest OLAP databases for logging.

Dear god, could you imagine?

asha23 commented 7 years ago

I think the best approach would be to build it using old-school .asp and a flat-file database (preferably a .txt file).

I need it to work on my Amiga.

davidjeddy commented 7 years ago

All excellent ideas, however how will this perform while running on my Commodore 64?

battlesnake commented 7 years ago

Who cares about insignificant targets like C64 and x86? Enterprise software only cares about Itanium!

TiBeN commented 7 years ago

I truly disagree. Being able to support legacy software is of the utmost importance. For this use case I suggest setting up a replicated C64 virtual machine on a cluster of Itanium servers, with an ESB to integrate it into the enterprise IT system.

Windowsfreak commented 7 years ago

I'm sorry, but unless we use a distributed database like Cassandra, so that each MicroService flavour's multiple Docker nodes (launched by Kubernetes) can value integrity over idempotency when calling other subsystems, I won't buy this. Just remember to use RxJava to account for the pipelining, so they don't produce too much overhead spawning new workers that sit waiting for their responses.
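As a sketch, assuming RxJava 3 (the remote call is a hypothetical stand-in):

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.schedulers.Schedulers;

// Sketch: pipeline calls to the downstream subsystems with bounded
// concurrency instead of spawning one blocked worker per request.
public class PipelinedCalls {
    // Hypothetical stand-in for the remote Strategies call.
    static String classify(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return Integer.toString(n);
    }

    public static void main(String[] args) {
        Flowable.range(1, 100)
                .flatMap(n -> Flowable.just(n)
                                .subscribeOn(Schedulers.io())
                                .map(PipelinedCalls::classify),
                        8) // at most 8 concurrent "remote" calls in flight
                .blockingForEach(System.out::println);
    }
}
```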

Also, if you're new to MicroServices, we'll need a BFF (backend-for-frontend), some sort of JWT cookies, a pre-authorization hook, and some sort of health check attached via Logstash or any other automated tool to provide live statistics for monitoring. Did I forget to mention that for integration tests we could fall back to much simpler CDCs (consumer-driven contracts) to avoid spawning all the container images at once?
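For the health check, a minimal sketch using only the JDK's built-in HTTP server; port and path are hypothetical:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Sketch: a bare-bones health endpoint for the monitoring hook to poll.
public class HealthCheck {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```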

swarkentin commented 6 years ago

This Issue should be closed, because it doesn't scale.

Skaldebane commented 3 years ago

Well if we want it to scale we might want to try Kotlin, but that's too childish a language to use in an Enterprise context.

Qix- commented 3 years ago

If we are going down the language rabbit hole, why not rewrite everything in Go?

It's incredibly enterprise grade, even Google uses it!

nicklatch commented 3 months ago

I say we split the interface and impls of the current system and then deploy each side to its own k8s cluster, which will in turn run inside a parent cluster. This solution will not need horizontal scaling, as all clusters are housed within the parent cluster. This will drop the costs of scaling extra EC2 instances, and all we have to do is run an instance of the 448-CPU, 12 TB RAM machine. DevOps will be contacting Leads soon to set up a meeting to plan a meeting about dev and QA environment solutions.

Windowsfreak commented 2 months ago

> all we have to do is run an instance of the 448-CPU, 12 TB RAM machine.

@nicklatch Your suggestion to utilize a 448-CPU, 12 TB RAM machine is certainly ambitious and could potentially address some high-demand workloads. However, from a risk management and availability perspective, relying on a single machine, even of this caliber, introduces a significant single point of failure.

To align with best practices for high availability and fault tolerance, I recommend deploying multiple machines across different Availability Zones. This approach not only mitigates the risk of downtime but also ensures continuity of service in the event of hardware or infrastructure issues. Additionally, by distributing the load, we can optimize resource utilization and potentially reduce costs by leveraging more appropriately sized instances.

Before we move along, we should discuss this further and explore the architectural adjustments needed to support such a deployment in a way that balances performance, cost, and reliability.

Qix- commented 2 months ago

I agree with @Windowsfreak here.

Not to mention that using a single machine for extensive, high-volume concurrent fizzbuzzing would introduce incredible amounts of cache thrashing. More CPUs = more TLBs = less cache thrashing, as we all know. Pinning the fizzbuzz workers to cores and disallowing any additional scheduling would also prevent preemption and context switching, which would be ideal. The workload is primarily compute- and I/O-bound, not memory-bound, so I'm not convinced that vertical scaling with such large amounts of RAM is the right case to optimize for. More CPUs with more network cards is the obvious winner here, to me at least.
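Java has no portable core-pinning API, so this is a sketch of "one worker per core" only; true affinity would need an OS-level tool such as taskset, or a native library:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: one fizzbuzz worker per hardware thread. Note that output
// order is not guaranteed -- very enterprise.
public class PerCoreWorkers {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (int i = 1; i <= 100; i++) {
            final int n = i;
            pool.submit(() -> System.out.println(
                    n % 15 == 0 ? "FizzBuzz"
                  : n % 3 == 0  ? "Fizz"
                  : n % 5 == 0  ? "Buzz"
                  : Integer.toString(n)));
        }
        pool.shutdown();
    }
}
```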

Plus, RAM is expensive compared to compute; small businesses that need to deploy FBEE at the edge will have to keep costs down in the long run, and I've never heard of any small startup getting unfathomable amounts of money to spend however they please without any recourse. Certainly not common.

sander-bol commented 2 months ago

TBH, I think we've all been missing the point here. FizzBuzzing may have started out as a small niche element of modern computing, but it has now proven itself a foundational building block of modern string-and-number decision trees (see: Exploration of SAD-TREE Implementations, Corwin et al., 2023). Looking at recent trends like cryptocurrencies and LLMs, it's obvious we will soon see the first FizzBuzzing ASICs hit the market. This will require a far more tightly integrated control loop, based on highly standardised FizzBuzzing-native compute primitives.

As microservices are a solution to an organisational problem, not a technical one, I therefore suggest putting a pin in the last 8 years of architecture debate. We should instead zoom back in on the basics: using FPGAs like AWS F1 instances, we should be able to run a quick proof-of-concept prototype in 4 to 6 months to validate whether offloading the core FizzBuzzing algorithm to dedicated silicon makes sense for our use case. Using this custom hardware, we should be able to spin up a microkernel that can handle the workload. Of course, once that work is done, we can always wrap these high-performance primitives in a cloud-agnostic service layer - but doing so before we have the necessary compute in place is probably akin to putting new tires on an old Rust-bucket.
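To keep that future service layer honest, a hypothetical Java-side boundary for the offload might look like this (interface and class names are invented for illustration):

```java
// Hypothetical boundary for the proposed hardware offload: the service
// layer codes against this interface, and an FPGA-backed implementation
// can be dropped in once the silicon exists.
public interface FizzBuzzAccelerator {
    // Compute results for [start, start + count) in one batch,
    // amortizing the host-to-device transfer cost.
    String[] computeBatch(long start, int count);
}

// Software fallback while the F1 proof of concept is pending.
class SoftwareFizzBuzzAccelerator implements FizzBuzzAccelerator {
    @Override
    public String[] computeBatch(long start, int count) {
        String[] out = new String[count];
        for (int i = 0; i < count; i++) {
            long n = start + i;
            out[i] = n % 15 == 0 ? "FizzBuzz"
                   : n % 3 == 0  ? "Fizz"
                   : n % 5 == 0  ? "Buzz"
                   : Long.toString(n);
        }
        return out;
    }
}
```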

Windowsfreak commented 2 months ago

@sander-bol I appreciate the forward-thinking approach of considering FPGA implementations for the FizzBuzz algorithm, especially given the potential for optimized performance at a hardware level. However, I have some concerns about this direction.

Transitioning to an FPGA-based solution would necessitate a significant architectural shift, moving away from our current, more flexible design. This shift could result in highly specialized, optimized code that is tightly coupled with specific hardware, thereby compromising our goal of maintaining a system-agnostic architecture. In the context of recent developments—such as the deprecation of certain Terraform licensing models and the abrupt discontinuation of critical projects by major search engine vendors—relying on proprietary hardware could expose us to unnecessary risks.

While the potential performance gains are enticing, we must weigh them against the long-term implications of vendor lock-in and reduced system flexibility. Additionally, while FPGA implementations could indeed reduce cycle counts and enhance execution speed, we should also consider the complexity and resource investment required to refactor and maintain such a solution.

It might be prudent to explore less invasive optimizations that align with our existing architecture and design goals before committing to a path that could introduce significant dependencies and technical debt.

I'd say: Let's instead focus on alternative approaches that balance performance with architectural integrity.