A quick and simple guide to get started with VerneMQ
VerneMQ is a high-performance, distributed MQTT message broker. It scales horizontally and vertically on commodity hardware to support a high number of concurrent publishers and consumers while maintaining low latency and fault tolerance. To use it, all you need to do is install the VerneMQ package.
Choose your OS and follow the instructions:
It is also possible to run VerneMQ using our Docker image:
If you built VerneMQ from sources, you can add the /bin directory of your VerneMQ release to PATH. For example, if you compiled VerneMQ in the /home/vernemq directory, add the binary directory (/home/vernemq/_build/default/rel/vernemq/bin) to your PATH so that VerneMQ commands can be used in the same manner as with a packaged installation.
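For example, with a bash-style shell (a minimal sketch, assuming the build path above):

```
export PATH=$PATH:/home/vernemq/_build/default/rel/vernemq/bin
```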
To start a VerneMQ broker, use the vernemq start command in your shell:
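```
# starts the broker as a background process
vernemq start
```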
A successful start will return no output. If there is a problem starting the broker, an error message is printed to STDERR.
To run VerneMQ with an attached interactive Erlang console:
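```
# runs the broker in the foreground with an interactive Erlang shell attached
vernemq console
```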
A VerneMQ broker is typically started in console mode for debugging or troubleshooting purposes. Note that if you start VerneMQ in this manner, it is running as a foreground process that will exit when the console is closed.
You can close the console by issuing this command at the Erlang prompt:
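```
q().    %% quits the Erlang shell and stops the node
```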
Once your broker has started, you can initially check that it is running with the vernemq ping command:
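```
# checks whether the local broker node is alive
vernemq ping
```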
The command will respond with pong if the broker is running, or with Node <NodeName> not responding to pings if it is not.
As you may have noticed, VerneMQ will warn you at startup when your system's open files limit (ulimit -n) is too low. You're advised to increase the OS default open files limit when running VerneMQ. Read more about why and how in the Open Files Limit documentation.
If you use a systemd service file (as in the binary packages), you can start VerneMQ using the systemctl interface to systemd:
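```
sudo systemctl start vernemq
```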
Other systemctl commands work as well:
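```
sudo systemctl status vernemq
sudo systemctl stop vernemq
```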
As well as being available as packages that can be installed directly into the operating systems, VerneMQ is also available as a Docker image. Below is an example of how to set up a couple of VerneMQ Docker containers and cluster them.
To use the provided Docker images, the VerneMQ EULA must be accepted. See Accepting the VerneMQ EULA for more information.
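A minimal sketch of starting a single node (the image name vernemq/vernemq and the DOCKER_VERNEMQ_ACCEPT_EULA variable are assumptions based on the official Docker image; adjust to your setup):

```
docker run -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" --name vernemq1 -d vernemq/vernemq
```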
Sometimes you need to configure port forwarding (on a Mac, for example):
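```
# same assumed image as above; forwards the MQTT (1883) and websocket (8080) ports to the host
docker run -p 1883:1883 -p 8080:8080 -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" --name vernemq1 -d vernemq/vernemq
```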
This starts a new node that listens on port 1883 for MQTT connections and on port 8080 for MQTT over WebSocket connections. However, at this point the broker won't be able to authenticate connecting clients. To allow anonymous clients, use the DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on environment variable.
Warning: Setting allow_anonymous=on completely disables authentication in the broker, and plugin authentication hooks are never called! See more information about the authentication hooks here.
The DOCKER_VERNEMQ_DISCOVERY_NODE environment variable allows a newly started container to automatically join a VerneMQ cluster. Assuming you started your first node as in the example above, you could autojoin the cluster (which currently consists of the single container 'vernemq1') as follows:
(Note: you can find the IP of a Docker container using docker inspect <CONTAINER_NAME> | grep "IPAddress".)
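A sketch of starting a second container that joins the first (DOCKER_VERNEMQ_DISCOVERY_NODE and the image name follow the conventions of the official Docker image; replace <IP-OF-VERNEMQ1> with the address obtained via docker inspect):

```
docker run -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" -e "DOCKER_VERNEMQ_DISCOVERY_NODE=<IP-OF-VERNEMQ1>" --name vernemq2 -d vernemq/vernemq
```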
To check whether the above containers have successfully clustered, you can issue the vmq-admin cluster show command inside one of them:
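```
docker exec vernemq1 vmq-admin cluster show
```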
MQTT consumers can share and load-balance a topic subscription.
Consumer session balancing has been deprecated and will be removed in VerneMQ 2.0. Use Shared Subscriptions instead.
Sometimes consumers get overwhelmed by the number of messages they receive. VerneMQ can load balance between multiple consumer instances subscribed to the same topic with the same ClientId.
To enable session balancing, activate the following two settings in vernemq.conf:
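A sketch of the two vernemq.conf lines (setting names as used by the deprecated consumer session balancing feature; verify against the comments in your vernemq.conf):

```
allow_multiple_sessions = on
queue_deliver_mode = balance
```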
Currently, these settings activate consumer session balancing globally on the respective node. Restricting balancing to specific consumers would require a plugin. Note that you cannot balance consumers spread over different cluster nodes.
On every VerneMQ node you'll find the vmq-admin command line tool in the release's bin directory (if you use the binary VerneMQ packages, vmq-admin should already be callable in your path, without changing directories). It has different sub-commands that let you check status, start and stop listeners, re-configure values, and perform a couple of other administrative tasks.
vmq-admin has different sub-commands with a lot of respective options. You can familiarize yourself by using the --help option on the different levels of vmq-admin. You might see additional sub-commands if integrated plugins are running (vmq-admin bridge is an example).
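To explore what's available:

```
vmq-admin --help            # top-level sub-commands
vmq-admin listener --help   # options of the listener sub-command
```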
vmq-admin works by RPC'ing into the local VerneMQ node by default. For most commands you can add a --node option and set values on other cluster nodes, even if the local VerneMQ node is down.
To check for the global cluster state in case the local VerneMQ node is down, you'll have to go to another node though.
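For example (a sketch; VerneMQ@192.168.1.11 stands in for one of your actual cluster node names):

```
vmq-admin listener show --node VerneMQ@192.168.1.11
```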
vmq-admin is a live re-configuration utility. Please note that all dynamically configured values will be reset by vernemq.conf upon broker restart. As a consequence, it's good practice to keep track of the applied changes when re-configuring a broker with vmq-admin. If needed, you can then persist changes by adding them to the vernemq.conf file.
You can configure as many listeners as you wish in the vernemq.conf file. In addition, the vmq-admin listener command lets you configure, start, stop and delete listeners on the fly. These can be MQTT, WebSocket or cluster listeners; in the command line output they will be tagged mqtt, ws or vmq accordingly.
To get info on a listener sub-command, invoke it with the --help option. Example: vmq-admin listener start --help
Listeners configured with the vmq-admin listener command will not survive a broker restart. Live changes to listeners configured in vernemq.conf are possible, but those listeners will simply be restarted along with the broker.
To start an MQTT listener using defaults, just set the port and IP address as a minimum (see the sketch below). This will start an MQTT listener on port 1884 and IP address 192.168.1.50. If you want to start a WebSocket listener, just tell VerneMQ by adding the --websocket flag. There are more options, mainly for configuring SSL (use vmq-admin listener start --help).
You can isolate client connections accepted by a certain listener from other clients by setting a mountpoint.
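A sketch of both variants (option syntax as printed by vmq-admin listener start --help; the mountpoint name mypoint and the second port are placeholders):

```
vmq-admin listener start port=1884 -a 192.168.1.50
vmq-admin listener start port=1885 -a 192.168.1.50 --mountpoint mypoint
```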
You can add the -k or --kill_sessions switch to the vmq-admin listener stop command. This will disconnect all client connections set up by that listener. In combination with a mountpoint, this can be useful for terminating the clients of a specific application, or for forcing re-connects to another cluster node (for example, to prepare a node for a cluster leave).
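For example (a sketch; same option-syntax caveat as above):

```
vmq-admin listener stop port=1884 -a 192.168.1.50 --kill_sessions
```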
VerneMQ uses Google's LevelDB as a fast storage backend for messages and subscriber information. Each VerneMQ node runs its own embedded LevelDB store.
There's not much you need to know about LevelDB and VerneMQ. One really important thing to note is that LevelDB manages its own memory. This means that VerneMQ will not allocate and free memory for LevelDB. Instead, you'll have to tell LevelDB how much memory it can use up by setting leveldb.maximum_memory.percent.
Configuring LevelDB memory:
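```
# example value only; pick a percentage that fits your workload and co-located services
leveldb.maximum_memory.percent = 20
```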
LevelDB means business with its allocated memory: it will eventually use the configured maximum, which can look like a memory leak or even trigger OOM kills. Keep that in mind when configuring the percentage of RAM you give to LevelDB. Historically, the default was 70% of RAM, which is too high for many use cases and can safely be lowered.
(e)LevelDB exposes a couple of additional configuration values that we link here for the sake of completeness. You can change all the values mentioned in the eleveldb schema file. VerneMQ mostly uses the configured defaults, and for most use cases it should not be necessary to change those.
VerneMQ uses the Erlang distribution mechanism for most inter-node communication. VerneMQ identifies other machines in the cluster using Erlang identifiers (e.g. VerneMQ@10.9.8.7). Erlang resolves these node identifiers to a TCP port on a given machine via the Erlang Port Mapper Daemon (epmd) running on each cluster node.
By default, epmd binds to TCP port 4369 and listens on the wildcard interface. For inter-node communication, Erlang uses an unpredictable port by default; it binds to port 0, which means the first available port.
For ease of firewall configuration, VerneMQ can be configured to instruct the Erlang interpreter to use a limited range of ports. For example, to restrict the range of ports that Erlang will use for inter-Erlang node communication to 6000-7999, add the following lines to vernemq.conf on each VerneMQ node:
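A sketch of the corresponding vernemq.conf lines (key names follow the Erlang distribution port_range settings; double-check them against your vernemq.conf):

```
erlang.distribution.port_range.minimum = 6000
erlang.distribution.port_range.maximum = 7999
```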
The settings above are only used for distributing subscription updates and maintenance messages. For distributing the 'real' MQTT messages, the proper vmq listener must be configured in vernemq.conf.
It isn't necessary to configure the same port on every machine, as the nodes will probe each other for this information.
Attributions:
This section, "VerneMQ Inter-node Communication", is a derivative of Security and Firewalls by Riak, used under Creative Commons Attribution 3.0 Unported License.
Managing VerneMQ live config values.
You can dynamically re-configure most of VerneMQ's settings on a running node by using the vmq-admin set command.
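For example (allow_anonymous is just an illustrative setting; VerneMQ@192.168.1.11 is a hypothetical node name):

```
vmq-admin set allow_anonymous=off
vmq-admin set allow_anonymous=off --node VerneMQ@192.168.1.11
```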
The following config values can be handled dynamically: