Closed monperrus closed 4 years ago
This was a very interesting session spanning both testing & DevOps (panelists Julien Bisconti, Jens Tinglev):
Property-based Testing in Java by Henry Luong and Saha Parumitas: PASS
+ originality of the topic
+ discussed a rare academic paper and technology
- did not talk clearly, too fast, no engagement
- no illustration
- no take-home slide
- problem with timing

A/B Testing by Adam Hasselberg and Aigars Tumanis: PASS
+ very good presentation, on time
+ clear and good references
+ very good illustrations
- not technical enough

Writing Testable Code by Kristian Alvarez Jörgensen: PASS
+ clear
+ good examples
- too simplistic assumptions (DevOps is about having many dependencies)
- not original, link to DevOps is weak

Open Container Initiative by Anders Sjöbom and Valtteri Lehtinen: PASS
+ informative, good references
+ relevance to DevOps
+ good timing
- not technical enough, not enough applicable knowledge
- not enough structure

Containerized Applications by Felix Eder and Erik Johansson: PASS
+ clear presentation
+ high relevance
- not technical enough
- too many gifs
- too long on the intro / timing issue

Docker Swarm vs. Kubernetes by Gustaf Gunér and Jakob Holm: PASS WITH DISTINCTION
+ extremely relevant to DevOps
+ very good summary in 9 minutes
+ professional, objective viewpoint on the topic
- monospace font
- monotonous presentation

Event Sourcing by Hannes Rabo and Julius Celik: PASS WITH DISTINCTION
+ clarity
+ prepared and rehearsed, with humor
+ good timing
+ mastery of the topic
- relation to DevOps somewhat shallow

Panelists: Laurent Ploix (Spotify), Vincent Massol (XWiki)
Infrastructure as Code by Adibbin Haider and Moa Nyman: FAIL (rule for repetition)
+ relevant to DevOps
+ clean slides
- too narrow understanding of the context and the alternative solutions
- lack of a concrete, technical example
- no illustration

Distributed Tracing by Axel Larusson and Arnthor Jonsson: PASS WITH DISTINCTION
+ excellent slides with references, incl. academic ones
+ perfect timing
+ inspiring, triggered a good discussion
+ good illustrations and examples
- lack of vision on the potential negative feedback loop

Configuration Management Tools by Duarte Galvão and Patric Ridell: PASS
+ relevant for DevOps
+ good examples with code
- not fun

Continuous Integration with CircleCI by Felix Kollin and Miguel Müller: PASS
+ demo
+ good slides, good talk
- superficial answers to questions (missing a more in-depth comparison with other CI tools such as Travis/Jenkins/Youtrack/Bamboo/etc.)

Git Branching Strategies by Toni Karppi and Anders Fredrik Åhs: PASS
+ very strong answers to questions, with deep knowledge of the topic
+ well-structured presentation, yet a little bit imbalanced between the two strategies
- not original, loosely related to DevOps

Final Rank | Group Name | Coverage | Execution Time | Effectiveness*** |
---|---|---|---|---|
1* | adamhas-asjobom | 75.20% | 01:00:00** | 0.956 |
2* | hrabo-pornell | 70.70% | 00:01:44 | 31.096 |
3* | tonik-krijor | 67.00% | 00:00:36 | 85.139 |
4* | fkollin-miguelmu | 67.00% | 00:05:48 | 8.810 |
5 | ajjo-axellaru | 66.80% | 00:00:20 | 152.850 |
6 | harisa-lukassz | 65.00% | 01:00:00** | 0.826 |
7 | jcelik-pridell | 63.70% | 00:01:19 | 36.861 |
8 | luttu | 60.50% | 00:17:14 | 2.676 |
9 | egood | 59.30% | 00:00:24 | 113.000 |
10 | oscarros | 57.00% | 00:03:05 | 14.086 |
11 | aeri3-egedda | 52.40% | 00:13:53 | 2.876 |
* These groups passed with distinction.
** The original submissions exceeded our 1-hour execution time budget, so we capped their runs at 1 hour.
*** Effectiveness definition: covered lines per second.
If you participated in the competition, you can find a detailed result log in the contributions/competition/group-name folder.
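Given the definition above (effectiveness = covered lines per second), here is a minimal sketch of how such a figure could be computed from the table's execution times. Note that the table reports coverage percentages, not absolute covered-line counts, so the line count below is an illustrative assumption, not a number from the results:

```python
def to_seconds(hms: str) -> int:
    """Convert an HH:MM:SS execution time (as in the table) to seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return h * 3600 + m * 60 + s

def effectiveness(covered_lines: int, execution_time: str) -> float:
    """Effectiveness = covered lines per second of execution time."""
    return covered_lines / to_seconds(execution_time)

# Hypothetical line count chosen for illustration only.
print(round(effectiveness(3065, "00:00:36"), 3))  # 85.139
```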
Load balancing with microservices by Haris Adzemovic and Anders Eriksson: DISTINCTION
+ perfect timing
+ super relevant
+ good understanding and clear illustrations
+ good balance between the speakers
- no take-home message

People, process, cultural aspects of DevOps: The good, the bad and the ugly, experiences from the industry by Anastasia Protopapa: PASS
+ critical thinking on DevOps
+ elaborate introspection based on two real experiences
- could be more fun and inspiring

Chaos Engineering and Gremlin by Emil Gedda and Lukas Szerszen: PASS
+ originality
+ highly relevant
- professional attitude can be improved

Coverage-guided fuzzing by Gábor Nagy: DISTINCTION
+ strong will to contribute to an important aspect of the course
+ clarity
+ deeply technical

Serverless cloud computing by Joakim Croona and Philip Strömberg: PASS
+ deeply technical
+ well-structured, nice slides
- lack of a clear definition of the core concept

Git at Scale - The World's Largest Repository by Louis Nathan and Nicole Carter: FAIL
+ interesting and well-sourced stats about real-world Git usage
- reading the presentation
- limited originality
- no take-away

Docker vs. Vagrant by Sara Ersson, Emma Good and Fredrik Norrman: PASS
+ relevant to DevOps
+ good comparison
- a little shallow

About the essay, there were 30 submissions:
DevOps vs Agile is distinguished because it provides a clear, well structured and elaborate comparison between the two approaches that are at the core of the essay.
Immutability is distinguished because of the Medium-like ease of read and very good illustrations.
Pipelines - A better approach to automated build jobs? is distinguished because the comparison is comprehensive and well-structured around the key aspects. The take-away is very clear, especially thanks to the table in the conclusion. The typesetting is impeccable.
An Introduction to Graph Databases is distinguished because of its outstanding technical depth.
A/B Testing - A Search Based Approach is distinguished because its structure, illustrations, and clarity are remarkable. It is sourced in the most recent industry and academic work.
About the open-task, there were 16 submissions:
The open-task Lanterne Rouge - DevOps Practice on Board Game Development is distinguished because of the systematic usage of DevOps tools like CI/CD pipeline, Kubernetes, Helm and Let's encrypt.
The task about Fuzzing of JSON Parsing Libraries w/ Open-Source Contributions is distinguished because of its remarkable technical contribution and impact on real-world projects.
The open-task Review Collector is distinguished because of its stand-alone open repo with live deployment, which follows the best practices of successful open source software.
@monperrus Have I missed a change of grading criteria?
About the open-task, there were 16 submissions:
* 3/16 pass with distinction
* 11/16 pass
* 2/16 fail (feedback was sent over email for the repetition)
Thanks @netrounds-erik, this is correct. All 14 open tasks will get a P+ which will count in the final grade:
* 4 Pass means a final E
* 3 Pass / 1 Distinction means a final D
* 2 Pass / 2 Distinction means a final C
* 1 Pass / 3 Distinction means a final B
* 4 Distinction means a final A
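A minimal sketch of the grade mapping stated above, assuming exactly four graded tasks per student so that the final grade depends only on how many of them earned a distinction:

```python
def final_grade(distinctions: int) -> str:
    """Map the number of distinctions among the four tasks to a final grade.

    0 distinctions (4 Pass) -> E, up to 4 distinctions -> A.
    """
    grades = {0: "E", 1: "D", 2: "C", 3: "B", 4: "A"}
    return grades[distinctions]

print(final_grade(2))  # C  (2 Pass / 2 Distinction)
```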
@monperrus In the first lecture I believe it was mentioned that the winners of the fuzzing challenge would receive not one, but two distinction points. I can't find that rule anywhere in the repo. Does it still hold?
If it is not written, it does not hold. Transparency and stability of the rules are important.
The two distinctions you have are: the symbolic one, the fame, the most important; and the Ladok one, the grade, less important. Congrats again.
Is there an ETA for the Demo-grade?
Hi @lobax, thanks for the reminder. It is done; I will announce the results before next Monday. All of the demos passed, though some groups repeated the task; 4 of them got P+.
About the demo, there were 22 submissions:
The demo Dynamic Jenkins build agents using AWS is distinguished because it is very technical and practical, and the presentation is very good as well.
The demo App deployment with Dokku and DigitalOcean is distinguished because the content is technically challenging and well presented. There is an interesting easter egg. The group answered questions very well on site.
The demo Automate iOS development workflow is distinguished because the topic is relevant and hard. The content is original and shows deep understanding of the topic.
The demo Automatic Static Site Redeploys is distinguished because the demo contains a lot of work and is original, relevant and hard.
When will the results be reported in Ladok?
It is planned for this week. Best regards, --Martin
Results of presentations for Week 2 (March 25 2019)
This was a very good session, with a remarkable conversation after each presentation. Here are the results (panelists @bbaudry, @monperrus):
Comparison of code coverage measures by Kai Böhrnsen and Boran Sahindal: PASS
+ being the very first to present
+ key concept well claimed: code coverage != bug finding
+ wide spectrum of coverage criteria
- engagement
- no take-home slide

Flaky testing by Fredrik Flovén and Filip Jansson: PASS WITH DISTINCTION
+ originality and timeliness of the topic (outstanding)
+ based on scientific literature (outstanding)
+ good structure, good slides (outstanding)
+ talks clearly and loudly
- does not fit in 7-9 minutes

Automatic test generator EVOSUITE by Jespe Larsson and Benjamin Tellström: PASS
+ theatricality
+ effort to engage with fun
- presentation not well structured
- no illustration

Are automatically generated test suites "good"? by Simon Larsén and Philippa Örnell: PASS WITH DISTINCTION
+ good timing (outstanding in this round)
+ critical thinking (outstanding)
+ super well structured (outstanding)
+ list of references
+ really good example & listing (outstanding)
- no illustration

Cross Browser Testing, Selenium Testing, and Mobile Testing by Kartik Kandavel Mudaliar and Yi-Pei Tu: PASS
+ demo
+ motivation about testing for stakeholders
- structure
- no take-home slide
- typos in slides

Automated Cross Browser Testing by Simon Jäger: PASS
+ good timing (outstanding in this round)
+ good motivation about the difficulty of web testing
+ deep and technical, very good listings (outstanding)
- a bit of an advertisement tone about SauceLabs
- presented alone, no collaboration
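The "code coverage != bug finding" point praised in the first Week 2 presentation can be illustrated with a hypothetical example: a test can execute every line of a buggy function (100% line coverage) and still miss the bug if its assertions are weak. The function and test below are invented for illustration:

```python
def absolute_value(x: int) -> int:
    """Intended to return |x|, but contains a deliberate bug."""
    if x >= 0:
        return x
    return x  # bug: should be -x

def test_absolute_value():
    # Executes both branches, so line coverage is 100%...
    absolute_value(5)
    result = absolute_value(-5)
    # ...but the weak assertion never checks the actual value,
    # so the bug (absolute_value(-5) == -5, not 5) goes undetected.
    assert isinstance(result, int)

test_absolute_value()      # passes despite the bug
print(absolute_value(-5))  # -5, not the expected 5
```

Coverage tells you which lines ran, not whether the test would have noticed a wrong result; that is exactly the distinction the presenters claimed.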