artsy / crm-poc

Repo to house proof of concept code and docs for a CRM Admin UI

Craig's departing CRM doc #5

Open craigspaeth opened 6 years ago

craigspaeth commented 6 years ago

CRM Proof of Concept

This issue will document my efforts to investigate what a CRM product at Artsy could look like. I explored this project from both a product and a technology angle. I talked with Stas, Becca, and Sara P. to get an idea of the collector-tool-related needs across various stakeholder teams, and opened up an engineering discussion about what it would mean to build a tool like this.

Hopefully, these docs are a useful reference for engineers, designers, and/or other product people to get a taste of what this "CRM" thing is that various people talk about. By no means are these prototypes or docs meant to be implemented verbatim, but I hope they are a helpful reference when someone decides to get the ball rolling on a project like this.

Happy New Year Y'all 🎁

Product

"CRM" or customer collector relationship management is an idea that resonates with many different teams at Artsy—especially in 2018 and beyond as Artsy focuses more on the "Buy" of Buy/Sell/Learn/Strengthen. The concept of CRM is an abstract idea that has been used to describe persona segmentation efforts in Marketing, permission management tools for Auctions, backend services for Engineering, and everything in between and around collectors. This creates a lot of confusion around what "CRM" means as a product—especially since there is no existing product to refer to.

I'll first attempt to document some of the different "CRM" definitions and needs from each team, then identify what an in-house CRM product could help these teams with.

Definition and Needs of Teams

Marketing

Marketing's concept of CRM primarily revolves around segmenting personas and sending targeted communications. These personas range from "Aficionado" to "Experienced Collector" and define how experienced a collector is, and therefore what kinds of marketing strategies we apply as we hope to progress them down the persona funnel into art buyers (kinda like collector level).

Marketing is very happy with their third-party tools and workflows for accomplishing these goals; however, they often struggle when attempting to aggregate and roll up data. For instance, they'd want to roll up data from Gravity, Constellation, Segment/Redshift, etc. and expose it as an XML feed to Sailthru for email targeting. They also have trouble getting a single view of things like email settings when they're managed across Sailthru, Gravity, and other places.

CRT GFI

CRT GFI has some of the highest stakes in an in-house CRM product, as for them it means something that can tie together a ton of disparate tools, from Looker to admin.artsy.net to Impulse and more. They are often doing the most Artsy-specific work as they manage things like creating targeted offers, peeking at collectors' favorites/purchases, and other ways to keep the GMV flowing through the inquiry machine. Even customer support things like resetting passwords fall to them. CRM means cool new possibilities to amplify their effectiveness, like surfacing "trust scores" or "likeliness to purchase an artwork" through data science.

An in-house CRM product could give this team a single view of a user, streamline their workflow by consolidating and replacing the many apps they jump between, and give a home to new data-science scores/dashboards.

CRT Auctions

CRT Auctions' idea of a CRM is a lot like GFI's, but leans towards auction registration and bidder permission management. There is less emphasis on things like inquiries and conversations, but there is still a lot of jumping between Auctions' tools/services, e.g. Ohm, Convection, etc.

Like CRT GFI, they could benefit a lot from streamlining their workflow by consolidating to one tool, and from a single view of a user that gives better context for determining trustworthiness and strategizing things like underbidder offers.

Content

I didn't get a chance to talk with Content much, but I believe their idea of CRM is much more about managing events and attendees. When Content hosts Onsite and other IRL events, they want to manage lists of collectors who are not yet users on Artsy.

An in-house CRM product could help upgrade spreadsheets of collectors into a database of potential Artsy users. Centralizing this data could allow for more sophisticated ways to nurture these relationships through email lists and setting up Artsy accounts for attendees.

Engineering

The idea of CRM to Engineering largely revolves around Constellation and the idea of creating a microservice that houses collector data (as opposed to the "user account", e.g. auth, data in Gravity). This would help give a home for new collector data and contribute to chipping away at the Gravity monolith by separating concerns.

An in-house CRM product would clean up architecture and give Engineering a central place to write new collector model code and a faster environment to iterate on collector-related features. It would also clean up tech debt from old admin tools like Torque.

Summary

This is a large surface area with lots of disparate needs and ROIs. Product doesn't have the bandwidth to serve all these needs in-house, but a little bit of foundational work can go a long way towards centralizing these needs in one place and providing a canonical home for collector/user-related features.

Artsy needs two things for an in-house CRM product:

  1. Backend: A central place for collector/user data to flow through that can be a home for integrating with third-party backends, a place to put new data fields, or a place to add imported spreadsheet data (a rough sketch follows this list).
  2. Frontend: An admin tool product that has plenty of levels of information architecture to house whatever form fields, dashboards, and workflow tools third-party tools can't handle adequately.
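
To make the backend half a bit more concrete, here is a minimal sketch of what a collector-data schema in a Constellation-like GraphQL service could look like, using graphql-js; the type and field names are hypothetical placeholders, not an actual proposal for Constellation's schema.

```js
// Hypothetical collector-profile schema sketch using graphql-js (`npm install graphql`)
const { buildSchema } = require('graphql')

const schema = buildSchema(`
  type CollectorProfile {
    userId: ID!
    persona: String        # e.g. "Aficionado" through "Experienced Collector"
    trustScore: Float      # a home for new data-science fields
    importedFrom: String   # e.g. the spreadsheet a Content attendee list came from
  }

  type Query {
    collectorProfile(userId: ID!): CollectorProfile
  }
`)

module.exports = schema
```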

Design

I'll attempt to describe/document a rough idea of what a CRM admin tool could look like.

Information Architecture

We'll want to make sure there are plenty of levels to nest data and form fields without being cluttered. CMS has a good structure for this, so we could roughly follow that "four levels deep" structure with...

  1. a left sidebar navigation
  2. list view pages
  3. detail pages with a sub-nav and sub-pages
  4. left-hand headers marking groups of form fields/data

A large majority of deeply nested functionality would live on the detail page of a person in this tool, so it might not be as necessary to use all of those levels of navigation elsewhere. Many workflow tools or views across many people can be pulled out into their own sidebar navigation items.

Rough sitemap

Select Feature Ideas

Thread

The "thread" concept would be an attempt to replace features like "user notes" in Torque, and address requested functionality such as being able to see who changed certain fields. This feature could act as a kind of changelog meets Facebook-like feed of the latest activity around a person.

Metadata, Communications, Interests, and Commercial

These are meant to be tabs of mostly form fields or static data. Stas has a spreadsheet/wireframe here that goes into detail on all these items. This organization just groups the fields from Stas's doc with other related concepts. The two right columns of the grid can be used to house other UIs besides form fields too. For instance, maybe we want a "masonry" layout next to an "Interests > Artworks" header that gives a nice overview, with images of the works a collector is interested in and little icons indicating "favorited", "offered", "collects", etc.

Feeds

Marketing primarily works across many users and leverages lots of third-party tools. The "feeds" placeholder is a vague idea for addressing these needs by giving a home to the endpoints we expose to something like Sailthru, for debugging purposes. A more advanced form could be a self-serve CRUD UI for copying Looker queries into CSV/XML feeds. The idea here is just to give some place to visualize the data pipelines we're managing with teams like Marketing and give them a place to go when they need to work with collector data—even if it's just a link-bank to other tools or documentation.
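
For example, one of those feeds could be a small Express route that rolls collector data up into XML for Sailthru to consume. This is only a sketch: the path, the fields, and the loadCollectors() helper are hypothetical stand-ins.

```js
// Hypothetical XML feed endpoint sketch
const express = require('express')
const router = express.Router()

// Stand-in for a rollup of data from Gravity, Constellation, Segment/Redshift, etc.
const loadCollectors = async () => [
  { email: 'collector@example.com', persona: 'Aficionado' }
]

router.get('/feeds/collectors.xml', async (req, res) => {
  const collectors = await loadCollectors()
  const items = collectors
    .map(c => `<collector><email>${c.email}</email><persona>${c.persona}</persona></collector>`)
    .join('\n')
  res.type('application/xml').send(`<collectors>\n${items}\n</collectors>`)
})

module.exports = router
```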

Visual Framework

Visually, this tool can borrow patterns from CMS and Writer for UI-component inspiration. The pages illustrated above use a simple three-column responsive grid with 30px gutters and 60px of padding from the 200px-wide sidebar.
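
Assuming styled-components (which comes up later in this thread), a rough sketch of that layout could look like the following; the component names are placeholders.

```js
import styled from 'styled-components'

// Placeholder layout components using the measurements described above
const Sidebar = styled.nav`
  position: fixed;
  top: 0;
  bottom: 0;
  width: 200px;
`

const Page = styled.main`
  margin-left: 200px;
  padding: 60px;
`

const Grid = styled.div`
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-gap: 30px;
`

export { Sidebar, Page, Grid }
```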

Technology

Goals

I consolidated a rough list of goals from people's comments in the thread discussing possible tech. directions (thanks all!).

  1. Rapid development experience
  2. Consistent with modern front-end patterns
  3. Reuse of familiar and un-inventive tools
  4. Flexible and extensible
  5. Good testing tools and test coverage
  6. Simple and sensible separation of concerns between back-end and front-end*

* There were two directions advocated for here—either strict dogfooding of APIs or talking to the repo's database directly and keeping external API calls minimal

It became clear that the team was pretty divided among a variety of directions this kind of project could take. Despite the lack of clear consensus, I attempted to come up with a direction that I think best addresses the six goals above in a way that fits well with where tech choices are headed at Artsy. The TL;DR of that is minimally tricked-out, separate Node and Next.js apps that send API calls to GraphQL APIs for their data needs—or a "contemporary Force for admin tools".

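As a minimal sketch of what a page in such an app could look like, here is a Next.js page fetching its data from a GraphQL API in getInitialProps; the endpoint env var, query, and fields are hypothetical stand-ins rather than an actual metaphysics query.

```js
// pages/users.js: hypothetical list page dogfooding a GraphQL API
import React from 'react'
import fetch from 'isomorphic-unfetch'

const UsersPage = ({ users }) => (
  <ul>
    {users.map(user => (
      <li key={user.id}>{user.name}</li>
    ))}
  </ul>
)

UsersPage.getInitialProps = async () => {
  // GRAPHQL_URL and the query/fields are placeholders for a metaphysics-style endpoint
  const res = await fetch(process.env.GRAPHQL_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: '{ users { id name } }' })
  })
  const { data } = await res.json()
  return { users: data.users }
}

export default UsersPage
```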

Reasoning

  6. Simple and sensible separation of concerns between back-end and front-end*

First, I think it's important to come to a conclusion about the last goal of backend vs. frontend separation. I agree with people that it's an easy and pragmatic solution to leverage OOTB talk-to-the-database-directly solutions like Rails + simple_form to build admin UIs. My experience with Torque/Inertia has also shown me what a mistake it can be to get too inventive and overcomplicated with admin UIs, which have the freedom to compromise on UX and rigor. That said, API and JavaScript tools have massively improved since the Torque days, and nowadays it can be nearly as easy and robust to build a bunch of form fields in a full-stack JS app talking to APIs. I also think that while we can and should compromise on UX in admin tools to keep implementations simpler, I don't think we should compromise, UX-wise, on having admin tools talk to many services. Talking with the various teams about their pain points in current workflows, it became clear that we're often underserving them by putting a bunch of scattered ActiveAdmins on top of Rails backends, and therefore encouraging them to go back to spreadsheets. As a very MVP UI meant to just give CRUD access to the data, that might be okay—but at that point, we might as well just teach them how to use GraphiQL.

On top of that, I very much agree with the dB & Bezos API dogfooding dogma. The "eat your own dog food when you build an API" approach comes with a lot of the same benefits we talk about with open source by default, such as how it encourages building things to a standard that is ready to be consumed by third parties, e.g. good documentation, code quality, and ease of grokking. It also, as dB points out, should not cost significantly more to take this route. Taking the other route makes it easy to take shortcuts in designing data models, the way those models are exposed to UIs, the authorization concerns around them, and so on. The moment you need to send a piece of data written in the non-dogfooding way to a separate frontend or microservice, you end up rewriting all of that code the API-driven way—or worse, you end up maintaining the expanded surface area of both ways of doing things. So I believe it is indeed true that "the separation of concerns quickly pays off". Then there's the off chance that an API could be exposed to the world and be the next AWS—highly unlikely with most APIs, but a nice side-effect of working more efficiently within an organization.

So for those reasons, I'd encourage us to always try to dogfood our APIs, preferably GraphQL APIs. Now for the rest of the points, 1–5, I'll explain below:

  1. Rapid development experience

Next brings all of the rapid DX tools from the React world into a nice convention-over-configuration package. That means hot reloading that refreshes the UI within milliseconds of cmd + s. React is a massive ecosystem now, with more libraries for UI building than Rails has. Next is pretty minimal beyond the tooling foundation, though, so app code is actually pretty straightforward React versus a more rigorous Reaction-like stack. Speaking of Reaction—using that as a shared component library would continue to pay dividends in rapid development down the line, and a project like this would contribute to it as it upgrades its own components to shared Reaction components.

  2. Consistent with modern front-end patterns

Next is a React + Webpack + JS stack that's very consistent with a lot of the React happening all around Artsy's apps.

  3. Reuse of familiar and un-inventive tools

While it may take some time for Rails die-hards/JS hold-outs at Artsy to get used to the Node + React environment, it's very similar to the JS stuff already at Artsy, and, broadly speaking, JS/Node/React is a way more popular, and therefore familiar, toolset to hire for these days. It will also be up to engineers working on these projects to avoid getting too fancy and not introduce new things like Apollo, Relay, Storybooks, etc., so as to keep things close to the "vanilla Next way" and make it easy to google answers.

  4. Flexible and extensible

Next/React provides isomorphic JS capabilities and can be mounted as an Express app. I encourage a modular "Force-like" structure to give it more flexibility, so sub-apps can deviate from the established patterns—but even without that, Next on its own has very solid solutions for everything from the simplest to the most complex UIs.
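
For reference, mounting Next behind Express looks roughly like the standard custom-server setup below; the commented-out sub-app mount is a hypothetical illustration of the Force-like modular structure.

```js
// server.js: Next mounted as an Express app (standard Next.js custom-server pattern)
const express = require('express')
const next = require('next')

const dev = process.env.NODE_ENV !== 'production'
const app = next({ dev })
const handle = app.getRequestHandler()

app.prepare().then(() => {
  const server = express()

  // Hypothetical: Force-like sub-apps could be mounted here and deviate from Next conventions
  // server.use('/feeds', require('./apps/feeds'))

  // Everything else falls through to Next's page handling
  server.get('*', (req, res) => handle(req, res))

  server.listen(3000, () => console.log('> Ready on http://localhost:3000'))
})
```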

  5. Good testing tools and test coverage

I added Jest and Enzyme (standard React testing tools) here, with some example tests. These tools are way ahead of the UI testing tools Rails gives you and can make it a pleasure to write fast, simple tests—and therefore encourage devs to write plenty of coverage. If we really wanted to add end-to-end tests at some point, Force has a good example of that using Nightmare.js.
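
A test in that setup looks roughly like the sketch below; UserHeader is a hypothetical component, and the Enzyme adapter is assumed to be configured in a Jest setup file.

```js
// user_header.test.js: a hypothetical shallow-render test with Jest + Enzyme
import React from 'react'
import { shallow } from 'enzyme'
import UserHeader from '../components/user_header' // hypothetical component under test

describe('UserHeader', () => {
  it('renders the collector name', () => {
    const wrapper = shallow(<UserHeader user={{ name: 'Jane Collector' }} />)
    expect(wrapper.text()).toContain('Jane Collector')
  })
})
```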

Backend Story

I've spent less time on the backend story for a CRM project, as that ball is already rolling and revolves around Constellation. To say a couple of things about where I could see it going—

Concluding

I want to mention again that this is just some documentation of my exploration in this project before departing. At the end of the day, I hope we have a strong team of product and engineering folks taking the reins together on this project. That team should feel empowered to drive an in-house CRM product to be the best design it can be and I only hope that some of this documentation is helpful in kick-starting that process when it comes.

Good luck and happy 2018 🙌

alloy commented 6 years ago

What a great document @craigspaeth, thanks so much for spending most of your time thinking through things rather than writing a bunch of code 👏

I agree with pretty much everything, except for this part:

It will also be up to engineers working on these projects to avoid getting too fancy with things and not introduce new things like Apollo, Relay, Typescript, Storybooks, etc. so as to keep things close to the "vanilla Next way" and make it easy to google answers.

I think we should be standardising more on a set of tools that works for all platforms, not less, so that our engineers are able to work on projects across platforms easily. Vanilla JS is, in my opinion, not a good choice for e.g. our mobile app(s); we need a bit more safety there, as we can't deploy all the time.

In addition, if many components will reside in Reaction, then not using TS in this repo doesn't really serve the goal of being able to hire any JS dev anyway, and it only gets rid of the extra DX that TS offers, for no benefit.

Most of my thoughts about standardisation for all JS platforms we have and will have apply to data fetching, storybooks, etc as well, and I’d add styled-components to that mix.

orta commented 6 years ago

Hah - yep, this is the Peril stack, nice (though I also sit with Alloy on vanilla JS vs TS, the lack of complexity is not worth the lack of safety/DX)

craigspaeth commented 6 years ago

Good points about TS and the need to learn it for Reaction—👍 to using that in a stack like this, then. I did already add styled-components to this prototype for Reaction integration, so it probably makes sense to go that one step further.

In that case, I'd just encourage not getting too fancy with the typing features of TS, so as to lower the bar for folks coming from the Ruby/Rails side. I think admin tools can afford fewer safety/IDE features in favor of a lower barrier to entry. Anecdotally, I've felt that TS is one of the harder pills to swallow for people jumping from Rails to the JS/React side—even if it's more of an emotional feeling of going from the cool hacker world of text editor + dynamic-lang to the enterprisey world of IDE + typed-lang—I think that's still a valid concern we should help ease people through.

I also hesitate a tiny bit about the overhead TS adds when integrating with the rest of the tools/ecosystem. For instance, I tried to add TS to this project and struggled a bit to integrate it with Next's Webpack setup. I also wouldn't, for instance, want people to have to contribute to DefinitelyTyped because they needed to add some tiny third-party date widget to a field. That said, as long as we have folks like Orta with the bandwidth and motivation to jump in and help teach and unblock people on these things, then it shouldn't be a problem. There's plenty of momentum behind TS too, so I think it has a strong future overall—I just figured vanilla-ish JS might be an easier lowest common denominator to work with.

In any case, it's up to you all now! Do what the team feels is best, and god-speed my friends! 🚀 🙌

damassi commented 6 years ago

Just to chime in really quickly: having spent a little bit of time with TypeScript, I have to agree that it feels like the way to go, assuming it's not too strict. That means: allow for interop with .js files (so that test-writing is more fluid; for example, MyComponent.test.js can import MyComponent.ts), allow for "any", and so on. With @types/repo-name we rarely have to worry about writing custom .d.ts definitions, and with VSCode most interop feels seamless. Once the flow is captured, the benefits are too numerous to ignore, and either way complex tooling is going to be a given. Devs will only get better at working with types (not to mention increased aptitude in an increasingly typed-JS world).
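
As a rough sketch, that "not too strict" setup could be expressed in a tsconfig along these lines (tsconfig.json allows comments); the exact options are just one way to do it, not a proposal for any particular repo.

```json
{
  "compilerOptions": {
    "allowJs": true,        // let MyComponent.test.js sit alongside MyComponent.ts
    "noImplicitAny": false, // allow "any" while people ramp up
    "jsx": "react",
    "module": "commonjs",
    "target": "es5"
  },
  "include": ["src"]
}
```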

But that said, in my experience I've never worked with a framework / stack that has been more flexible and easily adaptable to change than Force / Positron, so following those principles will take us a long way: JS should be the base, and TS, which is added on top, can follow. That way we're starting with a broader brush should the world suddenly take another u-turn (as it surely will).

alloy commented 6 years ago

Agreed on not getting too strict 👍