For context, it's helpful to review Understanding Connectors first. With that in mind, we recommend the following:
- Deploy as many Connectors as you need: at least one per network segment. There is no advantage to forcing network traffic through a single Connector or a small number of them. If traffic must reach a Resource on a different network segment from the Connector, you should configure and deploy another Remote network / Connector combination on your Twingate network.
- For redundancy, deploy at least two Connectors in every network segment. Multiple Connectors in the same Twingate Remote network are automatically clustered, which means that any Connector can forward traffic to Resources on the same Twingate Remote network.
- Connectors on the same Twingate Remote network should have the same network permissions. Because Connectors on the same Twingate Remote network are clustered, they should be considered interchangeable. Network permissions should be the same for every Connector to ensure that Resources are always accessible, regardless of which Connector is in use.
- Connectors should be deployed as close to Resources as possible. A significant benefit of establishing your Twingate network is that traffic is routed directly from users' devices to the Resource they are accessing. Because Connectors serve as the destination exit point for traffic, it's important that the "last mile" between the Connector and any Resources it serves is as short as possible to provide users with the best performance.
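The redundancy recommendation above can be sketched with Docker. The image name and environment variables below follow Twingate's documented Docker deployment, but treat them as assumptions and confirm against the deployment instructions in your Admin console, where each Connector's tokens are generated; `<network>` and the token values are placeholders.

```shell
# Sketch: two Connectors on the same Twingate Remote network for redundancy.
# Ideally run each on a separate host within the same network segment;
# they are clustered automatically because they share a Remote network.
docker run -d --restart unless-stopped --name twingate-connector-1 \
  -e TWINGATE_NETWORK="<network>" \
  -e TWINGATE_ACCESS_TOKEN="<connector-1-access-token>" \
  -e TWINGATE_REFRESH_TOKEN="<connector-1-refresh-token>" \
  twingate/connector:1

# Second Connector: same Remote network, its own pair of tokens.
docker run -d --restart unless-stopped --name twingate-connector-2 \
  -e TWINGATE_NETWORK="<network>" \
  -e TWINGATE_ACCESS_TOKEN="<connector-2-access-token>" \
  -e TWINGATE_REFRESH_TOKEN="<connector-2-refresh-token>" \
  twingate/connector:1
```

Because the two Connectors have their own credentials, either one can be replaced or restarted without affecting the other, and traffic fails over within the cluster.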
Regardless of your chosen deployment method, the following principles apply:
- Connectors only require outbound Internet access. Inbound Internet access to a Connector host is neither required nor recommended from a security standpoint. If you wish to limit outbound ports, you may restrict traffic to port 443 and the inclusive range 30000-31000.
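As an illustration of the outbound-only rule above, here is a minimal egress policy sketch assuming `iptables` on the Connector host and an example `10.0.0.0/8` private range; adjust for your actual firewall tooling, private address space, and protocol mix.

```shell
# Allow return traffic for connections the Connector initiates.
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Keep DNS working so the Connector can resolve Resource names.
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
# Traffic to private Resources must remain unrestricted; 10.0.0.0/8 is
# an example range, substitute your own.
iptables -A OUTPUT -d 10.0.0.0/8 -j ACCEPT
# Internet egress limited to 443 and the 30000-31000 range, per the
# guidance above (both TCP and UDP allowed here as an assumption).
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 30000:31000 -j ACCEPT
iptables -A OUTPUT -p udp --dport 30000:31000 -j ACCEPT
# Drop any other new outbound Internet flows.
iptables -A OUTPUT -j DROP
```

Note that no `INPUT` rules are opened: the Connector never needs inbound connections from the Internet.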
- Ensure that Connectors have both permission and routing rules to access private Resources. Resources that you configure in the Admin console will be resolved and routed from the Connector host.
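Since Resource names are resolved and routed from the Connector host, a quick sanity check on that host can catch missing DNS or routing configuration early. The helper below is a hypothetical example, and `internal.example.com` is a placeholder for one of your private Resources.

```shell
#!/bin/sh
# Hypothetical check: can this Connector host resolve a private Resource?
# Resolution happens via the host's own resolver, so private DNS zones
# must be reachable from here.
check_resource() {
  host="$1"
  if ! getent hosts "$host" > /dev/null; then
    echo "DNS resolution failed for $host"
    return 1
  fi
  echo "OK: $host resolves from this host"
}

check_resource internal.example.com || echo "Fix DNS or routing on this Connector host"
```

Run it for each Resource address you have configured in the Admin console; a failure here will surface to users as an unreachable Resource.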
These same principles apply if you are using Connectors as public exit nodes, except that you need to ensure that the Connector host has a static public IP assigned to it, either directly or via a NAT Internet gateway.
In general, the host that the Connector is running on should be optimized for, in decreasing order of importance:
- Network bandwidth
- CPU
Aside from transferring data, the primary task of the Connector is terminating the encrypted TLS tunnel from the Client, which is a CPU-bound task. If the host becomes either bandwidth- or CPU-bound, you can deploy additional Connectors within the same Remote network.
Below are some platform-specific machine recommendations for Connectors; we will add more in the future.
- On AWS, a t3a.micro Linux EC2 instance is a cost-effective choice and sufficient to handle bandwidth for at least 50 remote users under typical usage patterns.
- On Google Cloud, a g1-small machine is sufficient as a starting point for at least 50 remote users under typical usage patterns.
- For Azure, we recommend using their Container Instances service, which does not require hardware selection.
- Elsewhere, a Linux VM allocated with 1 CPU and 2 GB RAM is sufficient to handle at least 50 remote users under typical usage patterns.
- Any Linux flavor that supports Docker is acceptable.
- Another option is to run Docker on Windows. More detailed information can be found on our Windows documentation page.