Deploying Connectors

For context, it's helpful to review Understanding Connectors first. With that in mind, we recommend the following:

  1. Deploy as many Connectors as your network architecture requires. There is no advantage to forcing network traffic through a single Connector or a small number of them; the advantage of Twingate is that traffic is routed directly to the Connector that can access the Resource in question. As a general rule of thumb, if traffic must flow to a Resource on a different network segment from the Connector, you should configure and deploy another Remote Network / Connector combination on your Twingate network.
  2. For redundancy, always deploy a minimum of two Connectors per Remote Network. Multiple Connectors in the same Twingate logical Remote network are automatically clustered for load balancing and automatic failover, which means that any Connector can forward traffic to Resources on the same Twingate Remote network.
  3. Each Connector must be provisioned with its own token set. Each Connector you deploy must have its own unique set of tokens. Tokens are generated when you provision a Connector in the Admin console, so provision one Connector there for each Connector you deploy. Sharing tokens between different Connectors will cause errors.
  4. Connectors on the same Twingate Remote Network should have the same network scope and permissions. Because Connectors on the same Remote Network are clustered, they should be considered interchangeable. Network permissions should be the same for every Connector to ensure that Resources are always accessible, regardless of which Connector is in use.
  5. Connectors should be deployed as close to Resources as possible. A significant benefit of establishing your Twingate network is that traffic is routed directly from users' devices to the Resources they access. Because Connectors serve as the destination exit point for traffic, the "last mile" between a Connector and the Resources it serves should be as short as possible to give users the best performance.
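
As a concrete illustration of the recommendations above, the sketch below runs two Docker-based Connectors for the same Remote Network, each with its own token pair. The network name, token values, and container names are placeholders; copy the exact image tag and environment variable names from the deployment command the Admin console generates for each Connector.

```shell
# Two Connectors for the same Remote Network, each provisioned separately
# in the Admin console so each receives its own unique token set.
# All values below are placeholders for illustration.
docker run -d --name twingate-connector-1 --restart unless-stopped \
  -e TWINGATE_NETWORK="yournetwork" \
  -e TWINGATE_ACCESS_TOKEN="<connector-1-access-token>" \
  -e TWINGATE_REFRESH_TOKEN="<connector-1-refresh-token>" \
  twingate/connector:1

docker run -d --name twingate-connector-2 --restart unless-stopped \
  -e TWINGATE_NETWORK="yournetwork" \
  -e TWINGATE_ACCESS_TOKEN="<connector-2-access-token>" \
  -e TWINGATE_REFRESH_TOKEN="<connector-2-refresh-token>" \
  twingate/connector:1
```

Note that the two containers do not share any tokens, per recommendation 3, and both belong to the same Remote Network, per recommendation 2.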

Network Considerations

Regardless of your chosen deployment method, the following principles apply:

  1. Connectors only require outbound Internet access. Inbound Internet access to a Connector host is neither required nor recommended from a security standpoint. If you wish to limit outbound ports, restrict traffic to TCP 443 and the inclusive range TCP 30000-31000. No UDP ports are required.
  2. Ensure that Connectors have both permission and routing rules to access private Resources. Resources that you configure in the Admin console will be resolved and routed from the Connector host.
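
For example, on a Linux Connector host using ufw as the firewall, the outbound allowances could look like the following. This is a sketch assuming ufw; adapt it to your cloud security groups or firewall of choice, and remember that a default-deny egress policy also needs rules for DNS and anything else the host itself requires.

```shell
# Connectors need outbound access only; no inbound rules are required.
sudo ufw allow out 443/tcp             # control traffic over HTTPS
sudo ufw allow out 30000:31000/tcp     # inclusive relay port range
```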

These same principles apply if you are using Connectors as public exit nodes, except that you must also ensure the Connector host has a static public IP assigned to it, either directly or via a NAT Internet gateway. This is the IP address you will whitelist in the service you intend to use with the exit node.
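
To confirm which public IP the Connector host presents to the Internet (the address to whitelist), you can query an IP echo service from the host itself. This is a sketch; checkip.amazonaws.com is one of several such services.

```shell
# Run from the Connector host: prints the public egress IP that remote
# services will see. With a NAT gateway, this is the gateway's static IP.
curl -s https://checkip.amazonaws.com
```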

Hardware Considerations

In general, the host that the Connector runs on should be optimized for, in decreasing order of importance:

  • Network bandwidth
  • Memory
  • CPU

If a Connector host becomes resource-bound, you can deploy additional Connectors within the same Remote Network. Twingate will automatically load balance across all Connectors in the same logical Remote Network that you define within Twingate.

Below we have some platform-specific machine recommendations for Connectors.

AWS

A t3a.micro Linux EC2 instance is a cost-effective choice and sufficient to handle bandwidth for at least a hundred remote users under typical usage patterns.

Google Cloud

A g1-small machine is sufficient as a starting point for at least a hundred remote users under typical usage patterns.

Azure

For Azure, we recommend using their Container Instance service, which does not require hardware selection.

On-premise / Colo / VPS

A Linux VM allocated with 1 CPU and 2GB RAM is sufficient to handle at least a hundred remote users under typical usage patterns.

  • Any Linux distribution that supports deployment via systemd, Docker or Helm on an x86-based system is acceptable. Connector installation on ARM-based systems is not currently supported.
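
A quick pre-flight check for the architecture requirement, as a sketch: `uname -m` reports `x86_64` on supported 64-bit hosts.

```shell
# Connectors currently require an x86-based host; fail fast on ARM.
arch="$(uname -m)"
case "$arch" in
  x86_64|i686) echo "supported architecture: $arch" ;;
  *)           echo "unsupported architecture: $arch (ARM is not supported)" >&2 ;;
esac
```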

Connector Load Balancing & Failover

If you deploy multiple Connectors within the same logical Remote Network, Twingate will detect this and you will automatically benefit from Connector load balancing and failover redundancy.

When there is more than one Connector in a Remote Network, Twingate takes advantage of the redundancy and will automatically distribute clients connecting to resources on that Remote Network among the various Connectors for load balancing purposes. If a client is unable to connect to a particular Connector (e.g. because the machine on which the Connector is installed goes offline), the client will try connecting to other Connectors in the same Remote Network until a connection is successfully established.
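
Conceptually, the failover behavior resembles the loop below. This is a simulation only: reachability is faked with a variable, the Connector names are hypothetical, and in practice the Twingate client performs this probing internally.

```shell
# Simulated failover: try each Connector in the Remote Network until one
# answers. Here only connector-b is "online"; the real client probes
# actual Connector endpoints instead of comparing strings.
connectors="connector-a connector-b connector-c"
online="connector-b"

chosen=""
for c in $connectors; do
  if [ "$c" = "$online" ]; then   # stand-in for a real connection attempt
    chosen="$c"
    break
  fi
done
echo "connected via: ${chosen:-none}"
```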

If a Connector is later removed from or added to a Remote Network, Twingate will automatically adjust for that change.
