skoczen / will

Will is a simple, beautiful-to-code bot for slack, hipchat, and a whole lot more.
https://heywill.io
MIT License

Future of Will #395

Closed Ashex closed 3 years ago

Ashex commented 5 years ago

Development of Will has slowed quite a bit and there are a large number of pull requests pending that have not been reviewed by the maintainers.

I'm creating this issue to at least start a dialogue about the future of Will. Presently it looks like @skoczen is the only owner, although I did notice that @wontonst made a contribution which they merged themselves.

What is the state of owners/collaborators? Can we get more participation from @skoczen to get pull requests merged in and to offer developer support to resolve bugs/issues we've identified?

If the project is abandoned or semi-abandoned, then it may be time to fork it and make sure that more than one person has merge access (we'll need to establish a collaboration/contribution guide, of course) so that we can continue developing Will.

CC people who have submitted multiple PRs in the past year: @pastorhudson @dawn-minion @chalta

wontonst commented 5 years ago

Thanks for reaching out. The development of Will on our side has definitely slowed down a bit. The last time I spoke with @skoczen, he could not find the bandwidth to actively develop Will. That said, he is 100% behind the idea of transferring more ownership and control to the community. We just haven't figured out what that would look like yet. If anyone is interested in picking up more responsibilities, definitely reach out!

I think one of the main reasons we struggle to get PRs merged is that we don't have adequate unit test coverage (right now we're at 13%) and the cycle time for manual testing is substantial. I would love to merge some of those PRs, but there is a real risk of introducing breaking changes.

I'll also admit that I am not an expert in the Will codebase. The naive fixes all sound good to me, but if we merge them all, we could potentially be loading ourselves up with tech debt.

rsalmond commented 5 years ago

I've been using Will as my chatops tool of choice at three jobs now and am currently ramping up the newest job with lots of Will automation. I would be happy to help keep Will alive and humming!

chillipeper commented 5 years ago

Hey there, I have been using Will as well for a long period (over 3 years), and it has helped me learn a lot of Python. I would be very happy to help keep Will alive. I also made some minor contributions to the V1 bot.

@wontonst @skoczen can you please share a list of issues or tasks that need attention, so that people who are interested can take a look and maybe help out?

I would be happy to work on something, but I have no idea where to begin, and I think that's the case for many.

rsalmond commented 5 years ago

Aight, well, there's at least one bug I need fixed for work, but I'm gonna work on leaving added unit tests in my wake and try to get coverage up. In case folks are thinking a standalone org might be worth pursuing, it will come as no surprise to anyone that github.com/will already exists. If that's on folks' minds, we'll have to come up with a good name.

pastorhudson commented 5 years ago

I'm currently maxed out on other things, but we still use PCOBOT for day-to-day automation, and I will from time to time be doing sprints to add/update features. I would like to continue contributing.

Perhaps someone could make a big list of tests and then we could all start plugging away at them to get coverage up?

wontonst commented 5 years ago

I think a good starting point is some kind of testing infrastructure that simulates various chat application inputs to Will, e.g. a received message in a channel, a private message, retrieving the list of users, etc. Then we can orchestrate more complex test scenarios that cover some of the errors we are getting.

In the meantime, I've categorized all open issues.

rsalmond commented 5 years ago

@wontonst are you talking about a full integration test? Like actually connecting to Slack / Rocket.Chat and pushing events?

Also, a question for the thread: since Hipchat is EOL, is there another platform outside of Slack and Rocket.Chat that we care about at this point, or can we just focus on yanking all the Hipchat pieces out of Will and making those two platforms really solid for the time being?

Ashex commented 5 years ago

It would be great if we could create a "roadmap" for the types of unit tests we'd like to implement and work on writing those out. We should be able to cover the majority of the core tests using mock calls, since we're consuming API responses and acting on them. We would need to develop tests on a per-backend basis.

The current challenge from my perspective is that it's not 100% clear how the various systems interact, which makes developing full integration tests harder, since we first need to identify what we need to cover.

rsalmond commented 5 years ago

By various systems do you mean the components of Will or the chat backends? If it's the latter, my read on the Rocket.Chat and Slack io backends is that they're just basic API calls using the requests library. Should be easy to mock.
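
As a very rough illustration (the post_chat_message helper and URL below are placeholders, not Will's actual code), mocking a requests-based backend call with unittest.mock could look something like this:

# Sketch only: post_chat_message stands in for whatever helper a backend
# actually uses; the point is that requests.post can be patched so the test
# never makes a network call.
import unittest
from unittest import mock

import requests


def post_chat_message(url, payload):
    # Placeholder backend helper that wraps requests.
    return requests.post(url, json=payload).json()


class PostMessageTest(unittest.TestCase):
    @mock.patch("requests.post")
    def test_post_chat_message(self, mock_post):
        # Fake the API response so no network call happens.
        mock_post.return_value.json.return_value = {"ok": True}

        result = post_chat_message("https://example.invalid/api", {"text": "hi"})

        self.assertTrue(result["ok"])
        mock_post.assert_called_once()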

Ashex commented 5 years ago

I was referring to both, actually :) I agree that we can mock the API calls to the chat backends. We've just got all the different parts of the Will backends (pubsub, generation, storage, etc.) that we need to develop tests for, which requires a working knowledge of how each of those is supposed to behave and interact with the others.

Fortunately there's documentation on how to implement each of these backends which gives us a starting point.

chillipeper commented 5 years ago

@rsalmond I don't think there is any other platform we care about at this point other than Slack and Rocket.Chat. I am all in for removing all the Hipchat pieces and concentrating on those two.

As for the tests, I don't have much experience, so I wouldn't be able to start from scratch, but if I see a roadmap or have a starting point, I am pretty sure I will be able to contribute a bit :)

pastorhudson commented 5 years ago

@rsalmond I think Hangouts, Microsoft Teams, and perhaps Messenger or WhatsApp would be nice. The more platforms we support, the more likely we are to get contributions.

rsalmond commented 5 years ago

Aight so it sounds like for the immediate future our priorities are:

Mid term goals:

Future plans:

Seem about right?

WRT adding backends, or even the upkeep of Slack and Rocket.Chat for that matter (e.g. #393), it always struck me as odd that Will implements the API calls directly rather than consuming a reliable external library like slackclient. Since we're talking about future backends, is there any strong appetite for going DIY as opposed to letting other folks build and test that plumbing?

Edit: adding @pastorhudson's suggestion about the Slack Events API.

pastorhudson commented 5 years ago

I'd love to see support for the Slack Events API.

chillipeper commented 5 years ago

Cool, that sounds like a good plan and starting point.

I guess the idea is to keep using unittest as Will's testing framework? Any opinions about pytest? I'm opening up the subject since I see there are only 2 tests created, which means this is a good opportunity to decide which framework would be most suitable. I am fine using either one of them, but I'm more familiar with pytest.

It would also be nice to have test helpers that make it easy to test plugins, rather than having to do it manually. To me this is a priority (I haven't found a way to do it myself :( )

Regarding the use of slackclient, I do see that it is actually being used in the Slack backend in several places (the self.client property uses the library, and RTM is used for receiving messages). The only calls that are not implemented with the library are the following, which should be easily fixable:

SLACK_SEND_URL = "https://slack.com/api/chat.postMessage"
SLACK_SET_TOPIC_URL = "https://slack.com/api/channels.setTopic"
SLACK_PRIVATE_SET_TOPIC_URL = "https://slack.com/api/groups.setTopic"
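
Purely as a sketch (against the classic slackclient 1.x API, with placeholder channel IDs and simplified token handling), routing those calls through the library could look roughly like this:

# Sketch only: the same three calls routed through slackclient's generic
# api_call wrapper instead of hand-built requests to the URLs above
# (classic slackclient 1.x API; channel IDs here are placeholders).
import os

from slackclient import SlackClient

sc = SlackClient(os.environ["SLACK_API_TOKEN"])

# Instead of POSTing to SLACK_SEND_URL directly:
sc.api_call("chat.postMessage", channel="#general", text="Hello from Will")

# Instead of SLACK_SET_TOPIC_URL / SLACK_PRIVATE_SET_TOPIC_URL:
sc.api_call("channels.setTopic", channel="C0PLACEHOLDER", topic="New topic")
sc.api_call("groups.setTopic", channel="G0PLACEHOLDER", topic="New topic")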

I am really curious about the pros/cons of using the Events API over RTM. I saw somewhere that this suggestion came up because of the time Will was maxing out on CPU, but if history analysis is disabled in config.py that issue can be avoided (unless of course someone is actually using that history analysis backend). I also understand this might need a different thread, as the priorities at the moment are to have some testing and get rid of Hipchat.

Ashex commented 5 years ago

Seconding the use of SDKs instead of implementing the calls ourselves; it's a hell of a lot easier to incorporate new features when the support is written for us. The Slack features I've submitted have leveraged the slackclient library, and I can say it was a hell of a lot easier than dealing with getting the payload accepted by the API.

Regarding the EOL of the hipchat backend, we'll need to split out the hipchat mixins used by plugin.py. Apart from that it should (hopefully) be a straightforward process.

I'm personally open to using any unit test framework as long as it's got a strong community behind it. I've only worked with unittest but I'm happy to jump to anything else.

rsalmond commented 5 years ago

Aight, sounds like we've got some traction. Maybe we can start thinking about how to break these ideas up into separate issues? I've updated the above bullet list with the pytest suggestions, and I nominate @chillipeper to port over the couple of tests we do have, as I have not used pytest either.

"Add Unit Tests" is too big for a single issue IMO. Thoughts on how to break this down? I was thinking perhaps one issue per backend and mixin would get the ball rolling.

EOLing Hipchat could be done in one issue, I think? Though it's maybe a good idea to wait until we have at least a little more coverage before we go ripping a bunch of plumbing out.

pastorhudson commented 5 years ago

@rsalmond My reason for wanting the Slack Events API is that it is on demand rather than constantly listening. The Events API will let us tie directly into /commands. It will also let us build bot-as-a-service type applications that connect to multiple Slack teams rather than just one.
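
To make the contrast concrete: the Events API pushes JSON to an HTTPS endpoint instead of Will holding an RTM websocket open. A bare-bones sketch of the receiving side (using Flask purely for illustration; this is not how Will's web layer is wired today):

# Illustrative only: the general shape of a Slack Events API receiver, not
# Will's actual web layer. Flask is used here just to keep the sketch short.
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/slack/events", methods=["POST"])
def slack_events():
    payload = request.get_json()

    # Slack's one-time URL verification handshake.
    if payload.get("type") == "url_verification":
        return jsonify({"challenge": payload["challenge"]})

    event = payload.get("event", {})
    if event.get("type") == "message":
        # This is where the bot's dispatch/handler logic would be invoked.
        print("got message:", event.get("text"))

    return "", 200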

rsalmond commented 5 years ago

cc @chillipeper ^^

chillipeper commented 5 years ago

@pastorhudson I am in favor of adding support for the Events API, but I don't think dropping support for RTM would be a good idea. Having an on-demand API sounds great, but there are other use cases where you actually need the bot to be listening to what other people say or do, instead of having a user choose to execute this or that command. I personally use this feature for a couple of plugins. I saw somewhere that the concern was the amount of data that Will has to process with RTM, but from my point of view that is more a problem of scalability (and maybe a bug) than of using the correct/incorrect API. Still, this discussion might be ahead of us at this point if the idea is to get adequate coverage first.

So, thanks to @rsalmond's nomination, and also because I want to show my commitment to reviving this project, I have ported (and actually added full coverage of) the tests for the acl.py module. I have used fixtures and parametrization to show a bit of what pytest can do. I also have to admit that the solution might look a bit over-engineered; I am not an expert in testing, so please bear with me and correct me if there is anything that's not right. Check out #403 if you are curious, and I can help clarify any questions ;)
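
For anyone who hasn't used pytest before, the fixture/parametrize pattern looks roughly like this (the is_acl_allowed function below is a toy stand-in for illustration, not necessarily the signature acl.py actually exposes):

# Toy example of pytest fixtures + parametrization; is_acl_allowed here is a
# hypothetical stand-in, not Will's real acl.py API.
import pytest


def is_acl_allowed(nick, groups, acl_config):
    # Allow the nick if it appears in any of the requested groups.
    return any(nick in acl_config.get(group, []) for group in groups)


@pytest.fixture
def acl_config():
    # Shared test data, injected into every test that asks for it by name.
    return {"admins": ["alice"], "ops": ["bob"]}


@pytest.mark.parametrize(
    "nick,groups,expected",
    [
        ("alice", ["admins"], True),
        ("bob", ["admins"], False),
        ("bob", ["admins", "ops"], True),
    ],
)
def test_is_acl_allowed(acl_config, nick, groups, expected):
    assert is_acl_allowed(nick, groups, acl_config) == expected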

A funny fact is that after writing all those tests, coverage only went up by 1% 😄 What I am trying to say with this is that every contribution matters.

@wontonst maybe you could help us categorize our short-term goals, as @rsalmond suggested?

cc: @Ashex, @dawn-minion, @Chalta

skoczen commented 5 years ago

Hey all - quick reply with more to come. I'm really excited to see the activity here, and as @wontonst mentioned, I'm pretty maxed out, and would welcome the opportunity to move Will's development, planning, and releases into a broader community.

To that end, I've blocked out time this month to get contributors onboard, set folks up with access to pypi pushes, get all the things out of my head, and help facilitate all of you stepping in and helping to carry Will forward.

As I mentioned, I'll be responding in more detail in probably a couple days (and a bunch more) to get things settled this month, but I wanted to just say thank you to everyone.

Thanks for your patience as I've been unreliable and development and releases have slowed. Thanks for the PRs, CRs, activity, and bug reports. Thanks for caring enough to stick through all that, and for being interested in helping Will move forward! I deeply appreciate all of you, and the community here - being a part of it is one of the things I'm most proud of with Will.

More soon - but most importantly, thank you. You're all awesome. :)

-Steven

pastorhudson commented 5 years ago

@chillipeper I'll admit that using @hear has been a total failure for me. It could be that my regex is just not that great, but I've had to use @respond exclusively.

skoczen commented 5 years ago

Hey everyone,

As promised, another, larger update and brain dump from my side.

In reading through all the recent activity, it's really clear to me that a) this community is awesome, and b) I just need to get out of your way. :)

To that end, my top priorities are: 1) getting contrib access to the GitHub repo set up, 2) getting release access to PyPI set up, and 3) answering questions / brain-dumping answers about why things are done a certain way right now, plus my two cents on whether they should stay that way.

Here's some actions on each:

  1. Contrib access: @wontonst currently has collab access, and based solely on recent activity, I've added @chillipeper, @rsalmond, and @Ashex. I am certain that I've missed people who should be added, and it's quite possible that one of these four folks doesn't want to be added (please speak up if so!). But I wanted to do something to get us moving forward.

As things like decision-making and who has time to be active become more clear, we can and should move roles around and change things. Please don't take who's added and not added as anything other than me taking a quick first pass to get myself out of the way. :)

  2. Release access: I'll be adding @wontonst shortly, and he should have access to add people as before. All the same caveats as #1 apply. He didn't ask to be added, and I'm sure it will make sense to add other people. But right now, I'm the bottleneck, and I want to fix that.

  3. Answering questions / brain dump: see below. :)

Broad thoughts on why I've kept a tighter control, and why that needs to change:

Before now, I've kept punting on adding people to Will because the automated test suite wasn't good enough, and I knew that people and companies (including some that I worked at) relied on Will for critical operations.

I always wanted to open things up, but I didn't want to risk screwing companies over with a bad release - and as several people in this thread pointed out, manual testing is pretty time consuming.

In retrospect, this was entirely backwards. More hands help us both improve test coverage and share the load of testing. I (and we) have trusted this community to build Will - at some point last year, I believe we passed the tipping point where the majority of the code in Will was written by the community, not me.

I've been behind on this, and I had it wrong. Let's get more smart people in, and get me out of the way. :)

Things that matter to me in shifting the community and ownership.

There are a handful of things I'd like to advocate for in Will moving forward without me being so actively involved. They're the things that I think have been important in all of you using Will, and the interest and contributions we've seen over the 6(?!) years he's been around.

A dose of humility: this could all be wrong, and wishful thinking on my part. It's also possible that the community decides to go in a different direction than I've led Will thus far, and that's cool. But they're my two cents. Thanks for hearing them out. :)

1. Be Kind.

It's in the contribution docs at the top, and I think it matters. Too many open source projects are plagued by snippy comments from owners and contributors that turn off people from suggesting features, fixing bugs, and collaborating to make the project better.

I've always tried to put in extra effort in this project to make people feel welcome, assume that I might be wrong, and thank people for reaching out and submitting an issue and PR. I'd love for Will to stay a place where instead of closing PRs with:

"Automated closing because of inactivity."

We do things like:

"Hey, haven't seen a reply to the question above in a few months, and it helps us to keep things clean. Thanks for submitting this, and please reopen it if we have this wrong, or when you have time for it!"

As I'm sure basically everyone here knows through their work, actually writing code is the easiest part of being a developer and building a community. Good communication and culture is what makes great projects thrive, and I'd love to have a culture here that makes people happy, makes new people feel welcome, and keeps us and Will friendly and accessible.

2. Give Credit

Every contribution to Will has gone into:

I think it's really important to give folks as public a credit as possible. It's one of the few tangible rewards we get to contributing to an open-source project, and if it can help out in a future job interview or the like, it's 100% worth doing. I also think it helps show and build a real sense of community.

3. Will is stable

I've preferred stable over speed throughout Will's life, because I know companies are relying on him to ship products, test code, and automate things that affect people's paychecks.

I'll save the long rambles (since I suspect folks here largely agree), but to me, this boils down to:

4. Good documentation, and an easy way to get started.

When Will first started, documentation was awesome and complete, there was a quick start that was just a few steps long, and things Just Worked. The docs have been in a slow decline since then, but I think they're still ok. I'd love for them to be great.

I believe that good, friendly documentation is what separates good open-source projects from great ones, and I'd advocate that it be a priority for Will as the project moves forward.

Specifically, documentation (or a mix of documentation and interactive startup) that covers:

5. Secure. Encrypt by default.

I am not a crypto expert of any kind, and it's very possible there are security holes in Will. However, I'd love for there not to be, and I'd love for people to be able to run Will and trust that the information that's going into his system and being processed is safe from prying eyes.

2.0 added encrypted storage and events by default to that end, and I'd love to see future versions go even further.

6. Take the time to make it simple.

Finally, I've prioritized taking the time to make a simple, intuitive API, even if it means a (much) more complex machine under the hood. Things like .reply() Just Working, @respond_to making reasonable assumptions, and the system doing as much as possible to handle mistakes and make it easy for people to teach Will things.

I've always operated with the view that adding a simple Will plugin should be something even non-technical people on a team could do - and I've been at companies where that was true.

There will always be more power under the hood for developers who really need it, but it's been important to me to keep Will simple and accessible for simple tasks. I'd love to see that continue.

Things I am really happy to get out of the way and let the community drive

1. The future direction of development

Maybe we'll add whatsapp support. Maybe we won't. Maybe we'll lean in to the 2.0 generative-AI cognitive framework. Maybe we won't. Maybe we double down on Chatops. Maybe we double-down on conversational AI. Maybe we drop Hipchat like a bad habit. Actually, let's just do that.

In shifting control over to the community, I'm happy to make my voice just one in the room, and let this group decide what features are important and what direction this project moves in.

2. What platforms are supported

I'm totally agnostic. I tend to hold the view (agreeing with folks above) that more platforms are better. I love the idea of Will as a place to build your business and ops logic that's immune from the incompetence of companies like Atlassian. Code that's written for Will should be portable and useful across multiple platforms, and I'd love to see the list of places he works expand.

3. The architecture

I'll talk more about how things are below, and I do think the arch that's there is a pretty solid abstraction to build around. But I want to be explicit - if this community of smart people disagree and you want to rebuild Will in a different arch, great. I'm just one person. This group of brains is a whole lot smarter, and if there's a better way, let's do it.

Some big questions.

This is getting pretty long, so I'm just going to put these questions out and hold my thoughts for the moment. I'd love to hear from folks - what are your thoughts on these?

  1. How are decisions made (what to merge/not merge, build/not build)?
  2. When do releases happen?
  3. If we have breaking bugs, what's our process for getting those fixed?

Some technical notes & answers

Here's a few quick answers to some of the questions I've seen floating around. As these bring up other questions, I'm going to prioritize answering those, to get things out of my head, and into the community's space. Thanks for reading all this!

Direct API calls vs external libs:

Largely, when this has happened, it's either that the libraries didn't exist when I wrote it, weren't stable, or they didn't provide the functionality necessary to support what Will needed to run. I'm 100% in favor of pulling in stable, well-tested abstractions for platforms, backends, etc.

Why are things like "history" around?

This comes off of another issue, wondering why the history object was so big and being passed around everywhere. I can write more about this another time, but in the 2.0 rewrite, Will's internals were shifted from being a simple regex-matching machine to a decoupled, iterative loop that you could build a proper conversational AI in.

There's more in the docs, but history is the first analysis backend - you can imagine backends for sentiment analysis, conversation threading, etc.

I can definitely see a different version of history being bundled, or a different implementation that offloads the content and stops passing it around. However, I do think the io -> analysis -> generation -> execution loop is a solid abstraction that can allow Will to grow in some really exciting directions, and be prepared for the world we're likely to be living in 5-10 years down the road.

Testing

One of the things that didn't make it into 2.0 that I'd hoped for was a test harness that mocks chat/replies. I might still have some code sitting around, but broadly, my thought was to simply make it an io adapter that we could run against, then create tests in either a Python lib or some standardized gherkin/YAML format that folks were happy with.

Will the company

Yeah, this hasn't happened, and isn't currently likely to happen. There's a much longer story there that's maybe for another time, but bottom line - it's not happening now, and I don't see it happening in the foreseeable future.

I would still love to see some of the ideas in there come to life - bundled Will "apps", a marketplace/repo for installing them, skills that enable new connections through clean abstractions (imagine self.skills.google_calendar.add_event('title', my_datetime)).

But those are all dreams, and today, we live in reality. I don't want any of those dreams to get in the way of where this community wants to go.

Releases

There's a semi-fragile fabric release script that works, on my machine. It relies on bash, git, and being in the right directory, and builds and pushes new docs with the release. We should move it out of there. :)

Finally,

Thanks for reading all of this if you did, and again, for your interest, patience, kindness, and brains in helping Will continue to move forward.

I appreciate you.

Tags: @wontonst @rsalmond @Ashex @chillipeper @pastorhudson from this thread,

@netjunki @woohgit @brandonsturgeon @sivy @shadow7412 @derek-adair @pepedocs @mike-love as folks who have been really active in the past, and might want to hop in to this discussion and what's next.

woohgit commented 5 years ago

Hey guys,

The timing was not the best. I have slowly moved from Python to Golang, and unfortunately I've just switched Will off at CloudBees, because Slack has small built-in apps that are roughly equivalent to Will plugins.

So that means I won't be joining you on this new journey, but I wish you good luck with it!

Of course, I haven't forgotten Python, so if there is a need for a helping hand, just ping me via my handle.

All the best!

brandonsturgeon commented 5 years ago

@skoczen Still happy to help. Just need a place where I can jump in.

Maybe we could use github's kanban board for new features while using Issues for bugs?

pastorhudson commented 5 years ago

How do you guys feel about dropping support for 2.7? I'm all for it, and it might mean fewer tests we have to write.

skoczen commented 5 years ago

@woohgit totally get it, and thanks for all you've done! @brandonsturgeon awesome! @pastorhudson I'm a pretty strong -1 to dropping py2.7. I still use (and prefer) python 2 in lots of places, including a couple Will installs. But beyond me, its foothold in the sciences is real, significant, and not moving anytime soon. I'd (personally) want to have a really good reason to drop support. But again, those are my two cents. :)

rsalmond commented 5 years ago

Some feedback on the braindump.

First, I love the emphasis on community. @skoczen has set a great example (this issue is downright heartwarming) and I intend to follow that lead. Stuff like this makes me wish I'd stopped coding in isolation and taken up github years earlier.

WRT making big decisions I think we can start with just opening issues for big ideas and getting feedback and buy in from the group. :+1:'s and :-1:'s are easy. If somebody has a change in mind, cut an issue describing what you want to do and why and we can knock the idea around and get it formed up into something we're all happy with. Hopefully we don't have anything controversial enough to require a BDFL but that may wind up being you if it comes to it @skoczen. FWIW if I disagree with something that is widely wanted by the community I am definitely going to go with what the group wants. If for whatever reason I desperately hate the idea I will be very happy to maintain my own fork that takes a different path.

Regarding release cadence I feel like if all the tests are passing and nothing major has been altered or deprecated we should cut a patch release as frequently as we have a reason to do so. In my experience more frequent small changes are less problematic than less frequent larger bundles of changes. I'm less clear on what the criteria for a minor release might be but I feel like once we start shipping bigger features we will know it when we see it.

If we have to ship a fix to a breaking bug I suggest we do not merge it until the fix includes a test which verifies the fix works as intended and prevents us from shipping that regression again.

An opinion on how we proceed.

There's lots of interest here which is rad and I love it. Before we go full speed ahead on deprecating legacy stuff and pounding out features I would like it if we could agree on some amount of test coverage to establish before we do anything else. It will make it a lot easier for us to verify each others contributions and give users confidence that our hard work is reliable.

I will start off the bidding by saying that I'd like to see 100% unit test coverage of at least main.py and plugin.py. Very interested in y'all's opinions.

chillipeper commented 5 years ago

I also love the emphasis on the community. I have personally experienced this, and @skoczen has provided a great example that I will be very happy to follow. To make a long story short, the few things I have learned in Python are thanks to Will. I have no idea where the project will go, or what will happen in the future, but one thing I am certain of now is that it is my turn to give back.

Regarding decisions, I think for the moment it should be kept simple; opening issues sounds like the best approach.

I don't have much experience with formal releases, so I am on board with whatever the community decides. I do think it is more important to do quick, non-breaking releases than to wait long periods to release. If there is a big feature coming up, it makes sense to wait, but if we are fixing bugs or functionality, or adding tests, it should be released quickly. I think this comes down to how many people actually test things afterwards, and I personally would be much more comfortable deploying a minor release to prod than a huge release with lots of changes. Just as an example, even though I knew of the existence of Will 2.0 for over a year, I only started using it 2 months ago.

I also think that we need to have a good testing base, I am personally very willing to help with this when my time permits.

Regarding dropping support for Python 2.7, this is not a good idea at the moment. Even though EOL is scheduled for 2020, that does not mean that systems and people will stop using it. To give a couple of examples, CentOS and Mojave still ship with py2.7, and not everyone is allowed to install py3 on their systems. Lastly, keeping support for it, with proper CI configured, should not cause too much overhead. Removing support at this point would mean cutting off all those potential 2.7 users. I do recognize that this might prevent, or delay, us using the latest py3 libs that have not been backported, e.g. asyncio (yes, I dream of an asyncio Will 😄)

There are two very important questions IMO, that have been with me already for a long time, and I can't get around them, or have any idea how to approach them. I would love to have @skoczen opinion on these:

1. Scalability: what are your thoughts? At the moment, the only way we can scale Will is vertically, but I would love to see Will scaling horizontally. Will the current architecture and design support an implementation like this? I know that we are using a pub/sub broker and web sockets (at least for Slack). If this can be implemented, what would your suggestions be?

2. Plugin tests: I would love a brain dump or a suggestion on what the easiest approach would be for people to test their plugins. I think this is a very important feature we need to have; it should not only be easy to create plugins, but also easy to test them.

derek-adair commented 5 years ago

+1 for not dropping py2
+1 for sane version releases

Open Source is awesome. Open source release cycles are often NOT awesome.

Highly encourage anyone looking to get into contributing to check out semantic versioning. This is very relevant for anyone who consumes a semantically versioned project as well.

Also, Fowler's Refactoring is relevant.

skoczen commented 5 years ago

Quick thoughts: I largely agree with everything laid out by @rsalmond, and I'm happy to give opinions on controversial things, though I suspect that won't be needed, as folks here will reach consensus. +1 on cadence, bug fix process, and priorities.

Answers to @chillipeper 's questions:

  1. Scale - Will should scale both horizontally and vertically - that was a big part of the 2.0 architecture. There's an event queue and event handlers, all pluggable and decoupled. There's zero reason the handlers need to run on the same hardware as the queue, the IO code, or each other. Back when I was planning Will the Company, I had those spec'd out as separate instances, running at scale, where the handlers themselves didn't have to have any knowledge of the Will instance they were handling (which is why things like the history object are passed around with the event). TL;DR - you could basically take the current Will architecture and scale it up to a PAAS without many changes. Horizontal scaling should be super possible.
  2. Plugin Tests - I'd envisioned this as being bundled with the plugin - either as required test_hello methods for every def hello(), the YAML/Gherkin standard I'd mentioned, or just as a test_my_plugin.py unittest module.

To get it out of my head, the YAML/Gherkin-like thing would be something like:

test basic hello:
  - tester1: @will hello
  - will: hi!
test basic french:
  - tester1: @will bonjour
  - will: bonjour!

That said, in my mind that syntax could quickly become its own problem (maintaining parsers, testing more complex interactions, etc.), and having something more technical that just uses Python test libs might be a much more robust system. In that world, I'd see us creating an io backend that just outputs to an accessible internal object (or even a prefixed set of keys in storage), and then writing a few wrapper methods:


# Complete and utter pseudocode
import unittest


class ExampleTestClass(unittest.TestCase):
    def when_i_say(self, message, user=None):
        # self.say and self.default_user would come from the test io backend.
        if not user:
            user = self.default_user
        self.say(message, user)

    def expect_to_hear(self, message):
        # self.messages would be the accessible internal object the io backend writes to.
        last_message = self.messages[-1]
        self.assertEqual(message, last_message)


class HelloTests(ExampleTestClass):
    def test_hello(self):
        self.when_i_say("@will hello")
        self.expect_to_hear("hi!")

Super inspired to see all this movement, and excited to see what's next!

BrianGallew commented 4 years ago

Given the number of outstanding PRs and lack of progress in the last 5 months, should I assume this project has died and move on to something else?

dpritchett commented 3 years ago

@skoczen for a testing harness I can vouch for the patterns used by Lita. It's Ruby but the two bots are similar enough that it should be workable in Python as well.

It uses the model you've suggested above — a dummy io adapter that allows for simple assertions like "bot should respond to prompts of abc with responses def or ghi".

Here are some examples of Lita tests from a separate project of mine: https://github.com/dpritchett/lita-task-scheduler/blob/master/spec/lita/handlers/task_scheduler_spec.rb

Ashex commented 3 years ago

Commenting here as I'm going to try to get some movement, even if it is at a glacial pace. Now that we've got #428 underway and ready to be merged, I'll be cutting a new 2.2.0 release to mark this major rewrite. This also brings our minimum Python requirement to 3.6, so I'll be looking at the dependencies and tasking myself with moving them up to releases that, at a minimum, drop 2.7.
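
As a small sketch of what that floor could look like in packaging terms (illustrative only, not Will's actual setup.py), setuptools lets pip refuse installs on older interpreters via python_requires:

# Illustrative setup.py fragment, not Will's real one: python_requires makes
# pip refuse to install the package on interpreters older than 3.6.
from setuptools import setup, find_packages

setup(
    name="will",
    version="2.2.0",
    packages=find_packages(),
    python_requires=">=3.6",
)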

So short term will be housekeeping and taking care of minor bugs.

pastorhudson commented 3 years ago

What documentation needs to be updated for the release? I'm happy to contribute.

Ashex commented 3 years ago

Some things I've thought of so far:

I'd also like to get Travis running again. I'm really just figuring this out on the fly and am creating an issue for each thing that needs to be addressed for this release.

pastorhudson commented 3 years ago

I have fought the Travis dragon in my own fork. Let me know if you get stuck or want help.

pastorhudson commented 3 years ago

@Ashex I went to look at Travis CI and was reminded that they have decided to kill travis-ci.org. It might make sense to just switch our testing to GitHub Actions. I'm using it on other projects and it works well enough. I'm glad to work on this if you like.

Edit: According to https://travis-ci.community/t/open-source-at-travis-ci-an-update/10674 they are still offering free CI to open source. I'll submit a PR to make Travis work again.

brandonsturgeon commented 3 years ago

I've done a lot with GH Actions lately, so I'd be happy to lend a hand. Does the .travis.yml contain everything that needs to be converted?

pastorhudson commented 3 years ago

@brandonsturgeon We could use automated Docker builds! I think I have tests ready to go with #432. Just waiting for merge.

Ashex commented 3 years ago

Just merged it, apologies for the delay! The GH Actions tests ran:

https://github.com/skoczen/will/runs/2184019384

Ashex commented 3 years ago

Will is alive. I'm poking at it when able and will respond to any new issues opened. Closing this out.