Open kanishkdudeja opened 6 years ago
I've been thinking about this, @mgill25, and this feels like a reasonable way to approach it. Curious to hear your thoughts.
When the program starts, the main function runs in its own goroutine (this happens by default).
We run a goroutine named `ReadLogFile`. It will be responsible for reading logs from a log file, validating that each log line matches the configured log format, including/excluding log lines as per the configured regex patterns, and then sending structs of type `LogEntry` over a channel to the goroutines responsible for replaying requests.
We run `num-parallel-requests` goroutines (named, say, `ReplayRequests`) for replaying requests. The `ReadLogFile` goroutine will send structs of type `LogEntry` over a channel to these goroutines (in round-robin fashion) when they are ready to be replayed. These goroutines will then replay the requests and send the result of each replayed request, in a struct of type `LogReplayResult`, to another goroutine responsible for writing the error log file and printing out stats.
We run a goroutine named `ReplayResultsAggregator`, which will receive structs of type `LogReplayResult` over a channel from the goroutines responsible for replaying requests. This goroutine will increment the stats displayed on screen and write errors and failed log entries to a file.
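A sketch of the aggregator. What counts as "failed" (here: a transport error or a 5xx status) and the stats fields are assumptions; a real version would write failed entries to the error log file rather than collecting them in memory.

```go
package main

import "fmt"

type LogEntry struct{ Method, URL string }

type LogReplayResult struct {
	Entry      LogEntry
	StatusCode int
	Err        error
}

// ReplayStats accumulates counters for on-screen display;
// the fields here are assumptions.
type ReplayStats struct {
	Total, Succeeded, Failed int
}

// ReplayResultsAggregator consumes results until the channel is
// closed, incrementing stats and recording failed entries.
func ReplayResultsAggregator(results <-chan LogReplayResult) (ReplayStats, []LogReplayResult) {
	var stats ReplayStats
	var failed []LogReplayResult
	for r := range results {
		stats.Total++
		// Assumed failure criterion: transport error or 5xx response.
		if r.Err != nil || r.StatusCode >= 500 {
			stats.Failed++
			failed = append(failed, r)
		} else {
			stats.Succeeded++
		}
	}
	return stats, failed
}

func main() {
	results := make(chan LogReplayResult, 3)
	results <- LogReplayResult{StatusCode: 200}
	results <- LogReplayResult{StatusCode: 503}
	results <- LogReplayResult{Err: fmt.Errorf("timeout")}
	close(results)
	stats, failed := ReplayResultsAggregator(results)
	fmt.Println(stats.Total, stats.Succeeded, stats.Failed, len(failed))
	// prints: 3 1 2 2
}
```

Because a single goroutine owns the counters, no mutex is needed around the stats, which is one of the nicer properties of this pipeline shape.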
Not a bad way to approach it. I'll start a branch and start work on it tonight. :-)
We should probably add support for parallelism when replaying requests.
If there is sufficient network capacity available on the client's end (the machine replaying the requests) and the server's end (the infrastructure receiving these requests), we should use Go's built-in concurrency features, goroutines and channels, to replay requests faster.
The number of parallel requests our program will fire should be a configurable parameter.
We could probably name it:
num-parallel-requests