Recommend API – Slack Engineering

Slack, as a product, presents many opportunities for recommendation, where we can make suggestions to simplify the user experience and make it more delightful. Each one seems like a great use case for machine learning, but it isn't realistic for us to create a bespoke solution for each.

Instead, we developed a unified framework we call the Recommend API, which allows us to quickly bootstrap new recommendation use cases behind an API that is easily accessible to engineers at Slack. Behind the scenes, these recommenders reuse a common set of infrastructure for every part of the recommendation engine, such as data processing, model training, candidate generation, and monitoring. This has allowed us to ship a number of different recommendation models across the product, driving improved customer experience in a variety of contexts.

More than just ML models

We aim to deploy and maintain ML models in production reliably and efficiently, which is termed MLOps. This represents the majority of our team's work, while model training is a comparatively small piece of the puzzle. If you look at Matt Turck's 2021 review of the ML and data landscape, you'll see that for each component of MLOps there are more tools on the market than you could possibly choose from, which is a sign that industry standards are still developing in this area. As a matter of fact, reports show that a majority (up to 88%) of corporate AI initiatives struggle to move beyond the test stage. Companies such as Facebook, Netflix, and Uber typically implement their own in-house systems, and it is similar here at Slack.

Fortunately, most of the time we don't have to bother with picking the right tool, thanks to Slack's well-maintained data warehouse ecosystem that allows us to:

  • Schedule various tasks to process sequentially in Airflow
  • Ingest and process data from the database, as well as logs from servers, clients, and job queues
  • Query data and create dashboards to visualize and monitor data
  • Look up computed data from the Feature Store by primary key
  • Run experiments to facilitate feature launches with the A/B testing framework

Other teams at Slack, such as Cloud Services, Cloud Foundations, and Monitoring, have provided us with additional infrastructure and tooling that are essential to building the Recommend API.

Where is ML used in Slack?

The ML services team at Slack partners with other teams across the product to deliver impactful and delightful product changes wherever incorporating machine learning makes sense. What this looks like in practice is a number of different quality-of-life improvements, where machine learning smooths rough edges of the product, simplifying user workflows and experience. You can find this kind of recommendation experience in areas such as composer DM suggestions, the invite flow, Slackbot channel suggestions, and the channel browser.

An important result of the Recommend API, even apart from the current use cases that can be found in the product, is the nearly equal number of use cases we are currently testing internally, or have tried and abandoned. With simple tools to bootstrap new recommenders, we've empowered product teams to follow a core product design principle at Slack of "prototyping the path", testing and discovering where machine learning makes sense in our product.

Machine learning, when built up from nothing, can require heavy investment and can be quite hit-or-miss, so previously we avoided trying out many use cases that might have made sense, simply out of a fear of failure. Now, by removing that up-front cost, we've seen a proliferation of ML prototypes, and are netting out more use cases for machine learning and recommendation as a result.

Unified ML workflow across the product

With such a variety of use cases for recommendation models, we have to be deliberate about how we organize and think about the various components. At a high level, recommenders are categorized according to "corpus", and then "source". A corpus is a type of entity, e.g. a Slack channel or user, and a source represents a particular part of the Slack product. A corpus can correspond to multiple sources, e.g. Slackbot channel suggestions and the channel browser each correspond to a distinct source, but the same corpus: channel.
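As a sketch, the corpus/source split might be modeled like this in Python. All names here are invented for illustration; they are not Slack's actual taxonomy or identifiers:

```python
from enum import Enum


class Corpus(Enum):
    """A type of entity we recommend, e.g. a Slack channel or user."""
    CHANNEL = "channel"
    USER = "user"


# Each source (a surface in the product) maps to exactly one corpus,
# while one corpus can back several sources.
SOURCE_TO_CORPUS = {
    "slackbot-channel-suggestions": Corpus.CHANNEL,
    "channel-browser": Corpus.CHANNEL,
    "people-browser": Corpus.USER,
}


def corpus_for(source: str) -> Corpus:
    """Resolve the corpus a given source recommends over."""
    return SOURCE_TO_CORPUS[source]
```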

Regardless of corpus and use case though, the treatment of each recommendation request is fairly similar. At a high level:

  • Our main backend serves the request, taking in a query, corpus, and source and returning a set of recommendations that we also log.
  • When these results are interacted with in our client's frontend, we log those interactions.
  • Offline, in our data warehouse (Airflow), we combine these logs into training data to train new models, which are subsequently served to our backend as part of returning recommendations.

Here's what that workflow looks like in full: [workflow diagram]


Each source is associated with a "recommender" where we implement a series of steps to generate a list of recommendations, which:

  • Fetch relevant candidates from various sources, including our embeddings service where similar entities have close vector representations
  • Filter candidates based on relevancy, ownership, or visibility, e.g. private channels
  • Augment features such as entities' attributes and activities using our Feature Store
  • Score and sort the candidates with the predictions generated by the corresponding ML models
  • Rerank candidates based on additional rules

Each of these steps is built as a standardized class which is reusable between recommenders, and each recommender is in turn built as a series of these steps. While some use cases might require bespoke new components for these steps, often creating a new recommender from existing components is as simple as writing something like this:

final class RecommenderChannel extends Recommender {
	public function __construct() {
		parent::__construct(
			/* fetchers */ vec[new RecommendChannelFetcher()],
			/* filters  */ vec[new RecommendChannelFilterPrivate()],
			/* model    */ new RecommendLinearModel(
				/* extra features to extract */
			),
			/* reranker */ vec[new RecommendChannelReranker()],
		);
	}
}

Data processing pipelines

Besides being able to serve recommendations based on these components, our base recommender also handles essential logging, such as tracking the initial request that was made to the API, the results returned from it, and the features our machine learning model used at scoring time. We then output the results through the Recommend API to the frontend, where user responses, such as clicks, are also logged.

With that, we schedule Airflow tasks to join logs from the backend (server), providing features, with logs from the frontend (client), providing responses, to generate the training data for machine learning models.
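Conceptually, that join looks something like the following pandas sketch. The schemas and column names are invented for illustration; the real pipelines run as Airflow tasks over warehouse tables:

```python
import pandas as pd

# Backend log: one row per recommendation served, with the features
# the model used at scoring time.
served = pd.DataFrame({
    "request_id": ["r1", "r1", "r2"],
    "candidate":  ["C1", "C2", "C3"],
    "feature_member_count": [120, 8, 45],
})

# Frontend log: one row per user interaction (e.g. a click).
clicks = pd.DataFrame({
    "request_id": ["r1"],
    "candidate":  ["C2"],
})

# Left-join so every served candidate becomes a training row:
# clicked candidates get label 1, the rest get label 0.
training = served.merge(
    clicks.assign(label=1),
    on=["request_id", "candidate"],
    how="left",
)
training["label"] = training["label"].fillna(0).astype(int)
```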

Model training pipelines

Models are then scheduled to be trained in Airflow by running Kubernetes Jobs, and are served on Kubernetes clusters. With that we score the candidates and complete the cycle, thereafter starting a new cycle of logging, training, and serving again.

For each source we often experiment with various models, such as logistic regression and XGBoost. We set things up to make sure it's easy to add and productionize new models. For people-browser as the source, we're experimenting with six models in total, and the XGBoost ranking model takes only a small amount of Python code to train, with parameters such as:

					"objective": "rank:map",
					"n_estimators": 500,


We also output metrics of several kinds so that we can get an overall picture of how the models are performing. When a new model is productionized, the metrics are automatically updated to track its performance.

  • Reliability metrics: Prometheus metrics from the backend to track the number of requests and errors
  • Efficiency metrics: Prometheus metrics from the model serving service, such as throughput and latency, to make sure we're responding fast enough to all the requests
  • Online metrics: business metrics which we share with external stakeholders. Some of the most important metrics we track are the clickthrough rate (CTR), and ranking metrics such as discounted cumulative gain (DCG). Online metrics are frequently checked and monitored to make sure the model, plus the overall end-to-end process, is working properly in production
  • Offline metrics: metrics to compare various models during training time and decide which one we potentially want to experiment with and productionize. We set aside validation data, separate from the training data, so that we know the model can perform well on data it hasn't seen yet. We track common classification and ranking metrics on both training and validation data
  • Feature stats: metrics to monitor feature distributions and feature importance, on which we run anomaly detection to prevent distribution shift
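As an example of the ranking metrics above, DCG discounts each result's gain by its rank position, so the same click counts for more when it happens near the top of the list. A minimal sketch:

```python
import math


def dcg(relevances):
    """Discounted cumulative gain for results in ranked order.

    Position i (0-based) is discounted by log2(i + 2), so the first
    position has discount log2(2) = 1.
    """
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))


# A click on the first-ranked result is worth more than the same
# click on the third-ranked result.
assert dcg([1, 0, 0]) > dcg([0, 0, 1])
```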

Iteration and experimentation

In order to train a model, we need data, both features and responses. Most often, our work targets active Slack users, so we usually have features to work with. However, without a model, we won't be able to generate recommendations for users to interact with in order to get the responses. This is one variant of the cold start problem that is prevalent in building recommendation engines, and it's where our hand-tuned model comes into play.

During the first iteration we'll typically rely on a hand-tuned model based on common knowledge and simple heuristics, e.g. for send-time optimization, we are more likely to send invite reminders when the team or inviter is more active. At the same time, we brainstorm relevant features and begin extracting them from the Feature Store and logging them. This gives us the first batch of training data to iteratively improve upon.
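A hand-tuned model can be as simple as a fixed weighted sum of such heuristics. The following is a hypothetical sketch; the weights, signals, and threshold are invented, not Slack's actual send-time logic:

```python
def hand_tuned_score(team_activity: float, inviter_activity: float) -> float:
    """Fixed linear combination of simple activity heuristics (weights invented)."""
    return 0.6 * team_activity + 0.4 * inviter_activity


def should_send_reminder(team_activity: float,
                         inviter_activity: float,
                         threshold: float = 0.5) -> bool:
    """Send an invite reminder when the team or inviter looks active enough."""
    return hand_tuned_score(team_activity, inviter_activity) >= threshold
```

Because it needs no training data, a model like this can serve recommendations from day one, and the interactions it logs become the first batch of training data for a learned model.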

We rely on extensive A/B testing to make sure ML models are doing their job of improving recommendation quality. Whenever we switch from hand-tuned to model-based recommendations, or experiment with different sets of features or more sophisticated models, we run experiments and make sure the change is boosting the key business metrics. We will often be looking at metrics such as CTR, successful teams, or other metrics related to specific parts of Slack.
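One standard way to judge whether a CTR difference between the control and treatment arms of such an experiment is significant is a two-proportion z-test. A minimal sketch with invented numbers (not Slack's actual experiment framework):

```python
import math


def ctr_z_score(clicks_a: int, views_a: int,
                clicks_b: int, views_b: int) -> float:
    """Two-proportion z-test statistic for CTR of arm B vs. arm A."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled CTR under the null hypothesis that both arms are equal.
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se


# 5.0% control CTR vs. 6.0% treatment CTR over 10,000 views each:
z = ctr_z_score(500, 10_000, 600, 10_000)
```

A |z| above roughly 1.96 corresponds to significance at the usual 95% level for a two-sided test.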

Following is a list of recent wins we've made on the aforementioned ML-powered features, measured in CTR.

  • Composer DMs: +38.86% when migrating from the hand-tuned model to logistic regression, and more recently +5.70% with an XGBoost classification model and an extended feature set
  • Creator invite flow: +15.31% when migrating from the hand-tuned model to logistic regression
  • Slackbot channel suggestions: +123.57% for leave and +31.92% for archive suggestions when migrating from the hand-tuned model to an XGBoost classification model
  • Channel browser recommendations: +14.76% when migrating from the hand-tuned model to an XGBoost classification model. Below we can see the impact of the channel browser experiment over time: [trend chart]

Final thoughts

The Recommend API has been used to serve ML models for the last couple of years, though it took much longer to build the groundwork of the various services backing the infrastructure. The unified approach of the Recommend API makes it possible to rapidly prototype and productionize ML models across the product. Meanwhile, we're constantly improving:

  • The data logging and preprocessing process, so that it can be extended to more use cases
  • Model training infrastructure, e.g. scaling, hardware acceleration, and debuggability
  • Model explainability and model introspection tooling using SHAP

We're also reaching out to various teams within the Slack organization for more opportunities to collaborate on new parts of the product that could be improved with ML.


We wanted to give a shout out to all the people who have contributed to this journey: Fiona Condon, Xander Johnson, Kyle Jablon

Interested in taking on interesting projects, making people's work lives easier, or just building some pretty cool forms? We're hiring! 💼 Apply now