How to scale a Slack bot to 1000's of teams


To implement a Slack bot, I need to work with Slack's Real Time Messaging API. It is a WebSocket-based API that allows you to receive events from Slack in real time and send messages as a user. More info: https://api.slack.com/rtm

To create a bot for only one team, I need to open one WebSocket connection and listen on it for events.
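A minimal sketch of that single-team connection, assuming Python with the requests and websockets libraries; the token is a placeholder and error handling is omitted:

    import asyncio
    import json

    import requests
    import websockets

    BOT_TOKEN = "xoxb-..."  # placeholder bot token for this one team

    async def run_bot():
        # rtm.connect hands back a short-lived WebSocket URL for this team
        resp = requests.post("https://slack.com/api/rtm.connect",
                             data={"token": BOT_TOKEN}).json()
        async with websockets.connect(resp["url"]) as ws:
            # one long-lived socket; Slack pushes events as JSON frames
            async for frame in ws:
                event = json.loads(frame)
                if event.get("type") == "message":
                    print("message from Slack:", event.get("text"))

    asyncio.run(run_bot())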

To make the Slack bot available for another team, I need to open a new WebSocket connection. So:

  • 1 team => 1 websocket connection
  • 2 teams => 2 websocket connections
  • N teams => N websocket connections
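Scaling that naive pattern out, each additional team just becomes another socket. A minimal sketch, assuming Python with asyncio and reusing the single-team coroutine idea from above; the team tokens are hypothetical placeholders:

    import asyncio

    # Hypothetical per-team bot tokens; in practice these come from your own storage
    TEAM_TOKENS = ["xoxb-team-a-...", "xoxb-team-b-...", "xoxb-team-n-..."]

    async def run_bot_for_team(token: str):
        # Open rtm.connect with this team's token and listen,
        # exactly as in the single-team sketch above
        ...

    async def main():
        # N teams => N sockets, all multiplexed by one event loop in one process
        await asyncio.gather(*(run_bot_for_team(t) for t in TEAM_TOKENS))

    asyncio.run(main())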

What should I do to scale my WebSocket connections for an unlimited number of teams?

What kind of architecture can handle autoscaling of 1000's of WebSocket connections?

1 Solution

#1

With Slack sockets, you have lots of things to scale:

  • Number of sockets. This is easy because even cheap servers can handle thousands of sockets, like more than 50k. But each socket represents a couple of other types of load, listed next.
  • Amount of memory used per team, which depends on your own server implementation. If you are trying to keep a large amount of message history in memory, you will hit your server's limit faster than if your message processing code is somewhat stateless (see the sketch after this list).
  • Amount of I/O, which might make you want to offload any image serving to a separate load balancer.
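For the memory point, one way to keep the socket process somewhat stateless is to push per-team history into an external store instead of process memory. A minimal sketch, assuming Redis via the redis-py client; the key layout and the 1000-message cap are my own illustration, not something the answer prescribes:

    import json
    import redis

    store = redis.Redis(host="localhost", port=6379)

    def handle_event(team_id: str, event: dict):
        if event.get("type") != "message":
            return
        key = f"history:{team_id}:{event.get('channel')}"
        # Append to a capped list in Redis; the socket process itself keeps
        # no per-team history in memory
        store.rpush(key, json.dumps(event))
        store.ltrim(key, -1000, -1)  # keep only the most recent 1000 messages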

The other thing to consider is fault tolerance. Let's say you did sticky load balancing and one of your servers is handling 50 teams. That server is the only one handling those 50 teams, so if it goes down then all 50 bots go offline. Alternatively, you can open up multiple sockets per team on separate servers and use a message handling queue so that each message is only responded to once.
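A minimal sketch of that "respond only once" idea, assuming Redis is used as the shared claim store; the key format and TTL are illustrative assumptions:

    import redis

    claims = redis.Redis(host="localhost", port=6379)

    def should_handle(team_id: str, event: dict) -> bool:
        # team id plus the event timestamp identifies a Slack message uniquely
        key = f"seen:{team_id}:{event.get('event_ts') or event.get('ts')}"
        # SET NX succeeds only for the first server that claims this event,
        # so duplicate sockets for the same team stay silent
        return bool(claims.set(key, 1, nx=True, ex=3600))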

So the architecture I would propose is a thin, redundant load balancer for RTM sockets as a first layer, and a reliable message queue underneath that.

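A minimal sketch of the queue side of that architecture: the thin socket layer just publishes every raw event, and separate workers consume and reply. RabbitMQ via the pika client, the queue name, and the payload format are assumptions for illustration, not part of the answer:

    import json
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="slack_events", durable=True)

    def publish_event(team_id: str, raw_frame: str):
        # Called by the thin socket layer for every frame received from Slack;
        # workers behind the queue do the actual message handling and replies
        channel.basic_publish(
            exchange="",
            routing_key="slack_events",
            body=json.dumps({"team": team_id, "event": raw_frame}),
            properties=pika.BasicProperties(delivery_mode=2),  # persist to disk
        )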

