Welcome to the VerneMQ documentation! This is a reference guide for most of the available features and options of VerneMQ. The Getting Started guide might be a good entry point.
For a more general overview on VerneMQ and MQTT, you might want to start with the introduction.
For downloading VerneMQ see Downloads.
VerneMQ supports the WebSocket protocol out of the box. To be able to open a WebSocket connection to VerneMQ, you have to configure a WebSocket listener or Secure WebSocket listener in the vernemq.conf file first:
listener.ws.default = 127.0.0.1:9001
listener.wss.default = 127.0.0.1:9002
Keep in mind that you'll use MQTT-over-WebSocket, so you will need a JavaScript library that implements the MQTT client behaviour. We have used the Eclipse Paho client as well as MQTT.js.
You won't be able to open WebSocket connections on a base URL, always add the /mqtt path.
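For example, with the WebSocket listener configured above, a client would connect to a URL like the one below (host and port taken from the example listener; use wss:// for the secure listener):
ws://127.0.0.1:9001/mqtt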
VerneMQ can be installed on CentOS-based systems using the binary package we provide.
Once you have downloaded the binary package, execute the following command to install VerneMQ:
sudo yum install vernemq-<VERSION>.centos7.x86_64.rpm
or:
sudo rpm -Uvh vernemq-<VERSION>.centos7.x86_64.rpm
Once you've installed VerneMQ, start it on your node:
You can verify that VerneMQ is successfully installed by running:
If VerneMQ has been installed successfully, vernemq is returned.
Now that you've installed VerneMQ, check out .
Everything you must know to properly configure VerneMQ
Every VerneMQ node has to be configured. Depending on the installation method and chosen platform the configuration file vernemq.conf resides at different locations. If VerneMQ was installed through a Linux package the default location for the configuration file is /etc/vernemq/vernemq.conf.
A single setting is handled on one line.
Lines are structured Key = Value
Any line starting with # is a comment, and will be ignored
You certainly want to try out VerneMQ right away. For that you could disable authentication like so:
Set allow_anonymous = on
By default the vmq_acl authorization plugin is enabled and configured to allow publishing and subscribing to any topic, see for more information.
Set the time in seconds after a QoS=1 or QoS=2 message has been sent that VerneMQ will wait before retrying when no response is received.
retry_interval = 20
This option defaults to 20 seconds.
This option defines the maximum number of QoS 1 or 2 messages that can be in the process of being transmitted simultaneously.
Defaults to 20 messages, use 0 for no limit. The inflight window serves as a protection for sessions, on the incoming side.
The maximum number of messages to hold in the queue above those messages that are currently in flight. Defaults to 1000 messages; use -1 for no limit. This option protects a client session from overload by dropping messages (of any QoS). This parameter was named max_queued_messages in 0.10.*. Note that 0 will totally block message delivery from any queue!
This option specifies the maximum number of QoS 1 and 2 messages to hold in the offline queue.
Defaults to 1000 messages, use -1 for no limit, use 0 if no messages should be stored.
In contrast to the session based inflight window, max_online_messages and max_offline_messages serve as a protection for queues, on the outgoing side.
A quick and simple guide to get started with VerneMQ
VerneMQ is a high-performance, distributed MQTT message broker. It scales horizontally and vertically on commodity hardware to support a high number of concurrent publishers and consumers while maintaining low latency and fault tolerance. To use it, all you need to do is install the VerneMQ package.
Choose your OS and follow the instructions:
It is also possible to run VerneMQ using our Docker image:
To start a VerneMQ broker, use the vernemq start command in your Shell:
A successful start will return no output. If there is a problem starting the broker, an error message is printed to STDERR.
To run VerneMQ with an attached interactive Erlang console:
A VerneMQ broker is typically started in console mode for debugging or troubleshooting purposes. Note that if you start VerneMQ in this manner, it is running as a foreground process that will exit when the console is closed.
You can close the console by issuing this command at the Erlang prompt:
Once your broker has started, you can initially check that it is running with the vernemq ping command:
The command will respond with pong if the broker is running or Node <NodeName> not responding to pings in case it’s not.
As you may have noticed, VerneMQ will warn you at startup when your system’s open files limit (ulimit -n) is too low. You’re advised to increase the OS default open files limit when running VerneMQ. Read more about why and how in the .
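As a quick sanity check you can inspect the current limit of your shell session with the standard ulimit command shown below; raising it permanently is OS specific, and the value 65536 is only an illustrative example:
ulimit -n
ulimit -n 65536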
service vernemq start
rpm -qa | grep vernemq
max_inflight_messages = 20
To enable session balancing, activate the following two settings in vernemq.conf:
allow_multiple_sessions = on
queue_deliver_mode = balance
max_online_messages = 1000
max_offline_messages = 1000
vernemq start
vernemq console
q().
vernemq ping
VerneMQ can be installed on Debian or Ubuntu-based systems using the binary package we provide.
Once you have downloaded the binary package, execute the following command to install VerneMQ:
You can verify that VerneMQ is successfully installed by running:
If VerneMQ has been installed successfully, Status: install ok installed is returned.
Once you've installed VerneMQ, start it on your node:
The whereis vernemq command will give you a couple of directories:
Now that you've installed VerneMQ, check out .
Set the maximum size for client ids, MQTT v3.1 specifies a limit of 23 characters.
max_client_id_size = 23
This option defaults to 23.
This option allows persistent clients (those with clean_session set to false) to be removed if they do not reconnect within a certain time frame.
This is a non-standard option. As far as the MQTT specification is concerned, persistent clients are persisted forever.
The expiration period should be an integer followed by one of h, d, w, m, y for hour, day, week, month, and year; or never:
This option defaults to never.
Limit the maximum publish payload size in bytes that VerneMQ allows. Messages that exceed this size won't be accepted.
Defaults to 0, which means that all valid messages are accepted. The MQTT specification imposes a maximum payload size of 268435455 bytes.
How to setup and configure the HTTP listener.
The VerneMQ HTTP listener is used to serve various VerneMQ subsystems such as:
By default it runs on port 8888. To disable the HTTP listener or change the port, adapt the configuration in vernemq.conf:
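For example, the default HTTP listener corresponds to the following vernemq.conf entry; changing the port or bind address is a matter of adjusting the value (the same line appears in the configuration examples further down):
listener.http.default = 127.0.0.1:8888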
VerneMQ comes with a simple file-based password authentication mechanism which is enabled by default. If you don't need this it can be disabled by setting:
By default VerneMQ doesn't accept any client that hasn't been configured using vmq-passwd. If you want to change this and accept any client connection you can set:
Many aspects of VerneMQ can be extended using plugins. The standard VerneMQ package comes with several official plugins. You can show the enabled & running plugins via:
The command above displays all the enabled plugins together with the hooks they implement:
This enables the ACL plugin. Because the vmq_acl plugin is already started the above command won't succeed. In case the plugin sits in an external directory you must also provide the --path=PathToPlugin.
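For illustration, a hypothetical external plugin called myplugin located outside the default plugin directory could be enabled like this (plugin name and path are placeholders):
vmq-admin plugin enable --name=myplugin --path=/path/to/plugin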
As well as being available as packages that can be installed directly into the operating systems, VerneMQ is also available as a Docker image. Below is an example of how to set up a couple of VerneMQ Docker containers.
Sometimes you need to configure port forwarding (on a Mac for example):
This starts a new node that listens on 1883 for MQTT connections and on 8080 for MQTT over websocket connections. However, at this moment the broker won't be able to authenticate the connecting clients. To allow anonymous clients use the DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on environment variable.
The systree functionality is enabled by default and reports the broker metrics at a fixed interval defined in the vernemq.conf. The metrics defined are transformed to MQTT topics e.g. mqtt_publish_received is transformed to $SYS/<nodename>/mqtt/publish/received. <nodename> is your node's name, as configured in the vernemq.conf. To find it, you can grep the file for it: grep nodename vernemq.conf
The complete list of metrics can be found
This option defaults to 20000 milliseconds.
sudo dpkg -i vernemq-<VERSION>.bionic.x86_64.deb
Path: Description
/usr/sbin/vernemq: the vernemq and vmq-admin commands
/usr/lib/vernemq: the vernemq package
/etc/vernemq: the vernemq.conf file
/usr/share/vernemq: the internal vernemq schema files
/var/lib/vernemq: the vernemq data dirs for LevelDB (Metadata Store and Message Store)
persistent_client_expiration = 1w
listener.http.default = 127.0.0.1:8888
Warning: Setting allow_anonymous=on completely disables authentication in the broker and plugin authentication hooks are never called! See more information about the authentication hooks here.
In a production setup we recommend using the provided password based authentication mechanism or implementing your own authentication plugins.
VerneMQ periodically checks the specified password file.
The check interval defaults to 10 seconds and can also be defined in the vernemq.conf.
Setting the password_reload_interval = 0 disables automatic reloading.
vmq-passwd is a tool for managing password files for the VerneMQ broker. Usernames must not contain ":"; passwords are stored in a format similar to crypt(3).
How to use vmq-passwd
Options
-c
Creates a new password file. If the file already exists, it will be overwritten.
-D
Deletes the specified user from the password file.
-U
This option can be used to upgrade/convert a password file with plain text passwords into one using hashed passwords. It will modify the specified file. It does not detect whether passwords are already hashed, so using it on a password file that already contains hashed passwords will generate new hashes based on the old hashes and render the password file unusable. Note, with this option neither usernames nor passwords may contain
":".
passwordfile
The password file to modify.
username
The username to add/update/delete.
Examples
Add a user to a new password file: (you can choose an arbitrary name for the password file, it only has to match the configuration in the VerneMQ configuration file).
Delete a user from a password file
Acknowledgements
The original version of vmq-passwd was developed by Roger Light ([email protected]).
vmq-passwd includes:
software developed by the OpenSSL Project (http://www.openssl.org/) for use in the OpenSSL Toolkit.
cryptographic software written by Eric Young
software written by Tim Hudson ([email protected])
VerneMQ comes with a simple ACL based authorization mechanism which is enabled by default. If you don't need this it can be disabled by setting:
VerneMQ periodically checks the specified ACL file.
The check interval defaults to 10 seconds and can also be defined in the vernemq.conf.
Setting the acl_reload_interval = 0 disables automatic reloading.
Topic access is added with lines of the format:
The access type is controlled using read or write. If not provided then read and write access is granted for the topic. The topic can use the MQTT subscription wildcards + or #.
The first set of topics are applied to all anonymous clients (assuming allow_anonymous = on). User specific ACLs are added after a user line as follows (this is the username not the client id):
It is also possible to define ACLs based on pattern substitution within the topic. The form is the same as for the topic keyword, but using pattern as the keyword.
The patterns available for substitution are:
%c to match the client id of the client
%u to match the username of the client
The substitution pattern must be the only text for that level of hierarchy. Pattern ACLs apply to all users even if the user keyword has previously been given.
Example:
VerneMQ currently doesn't cancel active subscriptions in case the ACL file revokes access for a topic.
Anonymous users are allowed to
publish & subscribe to topic bar.
publish to topic foo.
subscribe to topic all.
User john is allowed to
publish & subscribe to topic foo.
subscribe to topic baz.
publish to topic all.
To make a plugin start when VerneMQ starts, it needs to be configured in the main vernemq.conf file.
The general syntax to enable a plugin is to add a line like plugins.pluginname = on, using the vmq_passwd plugin as an example:
And if the plugin is external the path can be specified like this:
Plugin specific settings can be configured via myplugin.somesetting = value, like:
See the vernemq.conf file for details.
vmq-admin plugin show
vmq-admin plugin enable --name=vmq_acl
Warning: Setting allow_anonymous=on completely disables authentication in the broker and plugin authentication hooks are never called! See more information about the authentication hooks here.
This allows a newly started container to automatically join a VerneMQ cluster. Assuming you started your first node like the example above you could autojoin the cluster (which currently consists of a single container 'vernemq1') like the following:
(Note, you can find the IP of a docker container using docker inspect <CONTAINER_NAME> | grep \"IPAddress\").
To check if the above containers have successfully clustered you can issue the vmq-admin command:
docker run --name vernemq1 -d erlio/docker-vernemq
vernemq.conf
The feature and the interval can be changed at runtime using the vmq-admin script.
Usage: vmq-admin set <Setting>=<Value> ... [[--node | -n] <Node> | --all]
Example: vmq-admin set systree_interval=60000 -n [email protected]
Examples:
systree_interval = 20000
systree_enabled = off
mosquitto_sub -t '$SYS/<node-name>/#' -u <username> -P <password> -d
dpkg -s vernemq | grep Status
service vernemq start
whereis vernemq
vernemq: /usr/sbin/vernemq /usr/lib/vernemq /etc/vernemq /usr/share/vernemq
max_message_size = 0
plugins.vmq_passwd = off
allow_anonymous = on
vmq_passwd.password_file = /etc/vernemq/vmq.passwd
vmq_passwd.password_reload_interval = 10
vmq-passwd [-c | -D] passwordfile username
vmq-passwd -U passwordfile
vmq-passwd -c /etc/vernemq/vmq.passwd henry
vmq-passwd -D /etc/vernemq/vmq.passwd henry
plugins.vmq_acl = off
vmq_acl.acl_file = /etc/vernemq/vmq.acl
vmq_acl.acl_reload_interval = 10
topic [read|write] <topic>
user <username>
pattern [read|write] <topic>
pattern write sensor/%u/data
# ACL for anonymous clients
topic bar
topic write foo
topic read all
# ACL for user 'john'
user john
topic foo
topic read baz
topic write all
+-----------+-----------+-----------------+-----------------------------+
| Plugin | Type | Hook(s) | M:F/A |
+-----------+-----------+-----------------+-----------------------------+
|vmq_passwd |application|auth_on_register |vmq_passwd:auth_on_register/5|
| vmq_acl |application| auth_on_publish | vmq_acl:auth_on_publish/6 |
| | |auth_on_subscribe| vmq_acl:auth_on_subscribe/3 |
+-----------+-----------+-----------------+-----------------------------+
vmq-admin plugin disable --name=vmq_acl
plugins.vmq_passwd = on
plugins.myplugin = on
plugins.myplugin.path = /path/to/plugin
vmq_passwd.password_file = ./etc/vmq.passwd
docker run -p 1883:1883 --name vernemq1 -d erlio/docker-vernemq
docker run -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on" --name vernemq1 -d erlio/docker-vernemq
docker run -e "DOCKER_VERNEMQ_DISCOVERY_NODE=<IP-OF-VERNEMQ1>" --name vernemq2 -d erlio/docker-vernemq
docker exec vernemq1 vmq-admin cluster show
+--------------------+-------+
| Node |Running|
+--------------------+-------+
|[email protected]| true |
|[email protected]| true |
+--------------------+-------+
VerneMQ supports multiple ways to configure one or many MQTT listeners.
Listeners specify on which IP address and port VerneMQ should accept new incoming connections. Depending on the chosen transport (TCP, SSL, WebSocket) different configuration parameters have to be provided. VerneMQ allows you to write the listener configurations in a hierarchical manner, enabling very flexible setups. VerneMQ applies reasonable defaults on the top level, which can of course be overridden if needed.
These are the only default parameters that are applied for all transports, and the only ones that are of interest for plain TCP and WebSocket listeners.
These global defaults can be overridden for a specific transport protocol listener.tcp.CONFIG = VAL, or even for a specific listener listener.tcp.LISTENER.CONFIG = VAL. The placeholder LISTENER is freely chosen and is only used as a reference for further configuring this particular listener.
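As an illustration of that hierarchy, the sketch below overrides the global acceptor default first for all TCP listeners and then for one specific listener; the listener name my_listener and the values are placeholders chosen for this example:
listener.nr_of_acceptors = 10
listener.tcp.nr_of_acceptors = 20
listener.tcp.my_listener.nr_of_acceptors = 100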
Normally, an MQTT broker hosts one single topic tree. This means that all topics are accessible to all publishers and subscribers (limited by the ACLs you configured, of course). Mountpoints are a way to host multiple topic trees in a single broker. They are completely separated and clients with different topic trees cannot publish messages to each other. This could be useful if you provide MQTT services to multiple separated use cases/verticals or clients, with a single broker. Note that mountpoints are configured via different listeners. As a consequence, the MQTT clients will have to connect to a specific port to connect to a specific topic space (mountpoint).
The mountpoints can be configured on the protocol level or configured/overridden on the specific listener level.
Since VerneMQ 1.5.0 it is possible to configure which MQTT protocol versions a listener will accept.
VerneMQ supports MQTT 3.1, 3.1.1, and 5.0 (since VerneMQ 1.6.0). To allow these protocol versions, set:
Here 3,4,5 are the protocol level versions corresponding to MQTT 3.1, 3.1.1 and 5.0 respectively. The default value is 3,4 thus allowing MQTT 3.1 and 3.1.1, while MQTT 5.0 is disabled.
Listen on TCP port 1883 and for WebSocket Connections on port 8888:
An additional listener can be added by using a different name. In the example above the name equals default and can be used for further configuring this particular listener. The following example demonstrates how an additional listener is defined as well as how the maximum number of connections can be limited for this listener:
VerneMQ listeners can be configured to accept connections from a proxy server that supports the PROXY protocol. This enables VerneMQ to retrieve peer information such as source IP/Port but also PROXY Version 2 protocol TLS client certificate details if the proxy was used to terminate TLS.
To enable the PROXY protocol for tcp listeners use listener.tcp.proxy_protocol = on or for a specific listener use listener.tcp.LISTENER.proxy_protocol = on.
If client certificates are used you can set listener.tcp.proxy_protocol_use_cn_as_username = on which will overwrite the MQTT username set by the client with the common name from the client certificate before authentication and authorization is performed.
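A hedged vernemq.conf sketch combining these options for the default TCP listener might look like the following; the address and port are illustrative:
listener.tcp.default = 127.0.0.1:1883
listener.tcp.default.proxy_protocol = on
listener.tcp.proxy_protocol_use_cn_as_username = on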
Accepting SSL connections on port 8883:
If you want to use client certificates to authenticate your clients you have to set the following option:
If you use client certificates and want to use the certificate's CN value as a username you can set:
Both options require_certificate and use_identity_as_username default to off.
The same configuration options can be used for securing WebSocket connections, just use wss as the protocol identifier e.g. listener.wss.require_certificate.
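A minimal sketch of such a secure WebSocket listener with client certificates, assuming the ssl option names apply unchanged under the wss prefix (addresses and file paths are placeholders):
listener.wss.default = 127.0.0.1:9002
listener.wss.cafile = /etc/ssl/cacerts.pem
listener.wss.certfile = /etc/ssl/cert.pem
listener.wss.keyfile = /etc/ssl/key.pem
listener.wss.require_certificate = on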
VerneMQ can be easily clustered. Clients can then connect to any cluster node and receive messages from any other cluster nodes. However, the MQTT specification gives certain guarantees that are hard to fulfill in a distributed environment, especially when network partitions occur. We'll discuss the way VerneMQ deals with network partitions in its own subsection
Set the Cookie! All cluster nodes need to be configured to use the same Cookie value. It can be set in the vernemq.conf with the distributed_cookie setting. Set the Cookie to a private value for security reasons!
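For example, a vernemq.conf entry setting the cookie could look like the line below; the value is a placeholder and should be replaced with your own private cookie:
distributed_cookie = my-private-cookie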
Before you go ahead and experience the full power of clustering VerneMQ, be aware of its stateful character. An MQTT broker is a stateful application and a VerneMQ cluster is a stateful cluster.
What does this mean in detail? It means that clustered VerneMQ nodes will share information about connected clients and sessions but also meta-information about the cluster itself.
For instance, if you stop a cluster node, the VerneMQ cluster will not just forget about it. It will know that there's a node missing and it will keep looking for it. It will know there's a netsplit situation and it will heal the partition when the node comes back up. But if the missing node never comes back there's an eternal netsplit (still resolvable by making the missing node explicitly leave).
This doesn't mean that a VerneMQ cluster cannot dynamically grow and shrink. But it means you have to tell the cluster what you intend to do, by using join and leave commands.
If you want a cluster node to leave the cluster, well... use the vmq-admin cluster leave command. If you want a node to join a cluster, well... use the vmq-admin cluster join command.
Makes sense? Go ahead and create your first VerneMQ cluster!
A cluster leave will actually do a lot more work, and gives you some options to choose from. The node leaving the cluster will go to great lengths trying to migrate its existing queues to other nodes. As queues (online or offline) are live processes in a VerneMQ node, it will only exit after it has migrated them.
Let's look at the steps in detail:
vmq-admin cluster leave node=<NodeThatShouldGo>
This first step will only stop the MQTT Listeners of the node to ensure that no new connections are accepted. It will not interrupt the existing connections, and behind the scenes the node will not leave the cluster yet. Existing clients are still able to publish and receive messages at this point.
The idea is to give a grace period with the hope that existing clients might re-connect (to another node). If you have decided that this period is over (after 5 minutes or 1 day is up to you), you proceed with step 2: disconnecting the rest of the clients.
vmq-admin cluster leave node=<NodeThatShouldGo> -k
The -k flag will delete the MQTT Listeners of the leaving node, taking down all live connections. If this is what you want from the beginning, you can do this right away as a first step.
Now, queue migration is triggered by clients re-connecting to other nodes. They will claim their queue and it will get migrated. Still, there might be some offline queues remaining on the leaving node, because they were pre-existing or because some clients do not re-connect and do not reclaim their queues.
VerneMQ will throw an exception if there are remaining offline queues after a configurable timeout. The default is 60 seconds, but you can set it as an option to the cluster leave command. As soon as the exception shows in console or console.log, you can actually retry the cluster leave command (including setting a migration timeout (-t), and an interval in seconds (-i) indicating how often information on the migration progress should be printed to the console.log):
vmq-admin cluster leave node=<NodeThatShouldGo> -k -i 5 -t 120
After this timeout VerneMQ will forcefully migrate the remaining offline queues to other cluster nodes in a round robin manner. After doing that, it will stop the leaving VerneMQ node.
So, case A was the happy case. You left the cluster with your node in a controlled manner, and everything worked, including a complete queue (and message) transfer to other nodes.
Let's look at the second possibility where the node is already down. Your cluster is still counting on it though and possibly blocking new subscriptions for that reason, so you want to make the node leave.
To do this, use the same command(s) as in the first case. There is one important consequence to note: by making a stopped node leave, you basically throw away persistent queue content, as VerneMQ won't be able to migrate or deliver it.
Let's repeat that to make sure:
Case B: Currently the persisted QoS 1 & QoS 2 messages aren't replicated to the other nodes by the default message store backend. Currently you will lose the offline messages stored on the leaving node.
Where should VerneMQ emit the default console log messages (which are typically at info severity):
log.console = off | file | console | both
VerneMQ defaults to logging the console messages to a file, which can be specified by:
This option defaults to /var/log/vernemq/console.log for Ubuntu, Debian, RHEL and Docker installs.
The default console logging level is info and can be changed by setting one of the following:
VerneMQ logs error messages by default. One can change the default behaviour by setting:
VerneMQ defaults to logging the error messages to a file, which can be specified by:
This option defaults to /var/log/vernemq/error.log for Ubuntu, Debian, RHEL and Docker installs.
VerneMQ logs crash messages by default. One can change the default behaviour by setting:
VerneMQ defaults to logging the crash messages to a file, which can be specified by:
This option defaults to /var/log/vernemq/crash.log for Ubuntu, Debian, RHEL and Docker installs.
The maximum size in bytes of individual messages in the crash log defaults to 64KB but can be specified by:
VerneMQ rotates crash logs. By default, the crash log file is rotated at midnight or when the size exceeds 10MB. This behaviour can be changed by setting:
The default number of rotated log files is 5 and can be set with the option:
VerneMQ supports logging to SysLog, enable it by setting:
Logging to SysLog is disabled by default.
In this section the subscription flow is described. VerneMQ provides several hooks to intercept the subscription flow. The most important ones are the auth_on_subscribe and auth_on_subscribe_m5 hooks which act as an application level firewall granting or rejecting subscribe requests.
The auth_on_subscribe and auth_on_subscribe_m5 hooks allow your plugin to grant or reject subscribe requests sent by a client. They also make it possible to rewrite the subscribe topic and qos. The auth_on_subscribe hook is specified in the Erlang behaviour and the auth_on_subscribe_m5 hook in the behaviour available in the repo.
The on_subscribe and on_subscribe_m5 hooks allow your plugin to get informed about an authorized subscribe request. The on_subscribe hook is specified in the Erlang behaviour and the on_subscribe_m5 hook in the behaviour available in the repo.
The on_unsubscribe and on_unsubscribe_m5 hooks allow your plugin to get informed about an unsubscribe request. They also allow you to rewrite the unsubscribe topic if required. The on_unsubscribe hook is specified in the Erlang behaviour and the on_unsubscribe_m5 hook in the behaviour available in the repo.
VerneMQ uses Google's LevelDB as a fast storage backend for messages and subscriber information. Each VerneMQ node runs its own embedded LevelDB store.
There's not much you need to know about LevelDB and VerneMQ. One really important thing to note is that LevelDB manages its own memory. This means that VerneMQ will not allocate and free memory for LevelDB. Instead you'll have to set a configuration value in vernemq.conf that tells LevelDB how much memory it can use up.
Configuring LevelDB memory:
The VerneMQ status page
VerneMQ comes with a built-in status page which by default is enabled and is available on http://localhost:8888/status, see .
The status page is a simple overview of the cluster and the individual nodes in the cluster as seen below:
The VerneMQ health checker
A simple way to gauge the health of a VerneMQ cluster is to query the /health path on the .
The health check will return 200 when VerneMQ is accepting connections and is joined with the cluster (for clustered setups). 503 will be returned in case any of those two conditions are not met.
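For example, a quick manual check against a node's HTTP listener could look like this (host and port assume the default listener mentioned above):
curl http://localhost:8888/health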
# defines the default nr of allowed concurrent
# connections per listener
listener.max_connections = 10000
# defines the nr. of acceptor processes waiting
# to concurrently accept new connections
listener.nr_of_acceptors = 10
# used when clients of a particular listener should
# be isolated from clients connected to another
# listener.
listener.mountpoint = off
log.console.file = /path/to/log/file
On every VerneMQ node you'll find the vmq-admin command line tool in the release's bin directory. It has different sub-commands that let you check the status, start and stop listeners, re-configure values and perform a couple of other administrative tasks.
vmq-admin is a live re-configuration utility. Please note that all dynamically configured values will be reset by vernemq.conf upon broker restart.
Don't use this to wildly re-configure a production system without keeping track of what you are doing. vmq-admin gives you the flexibility to test stuff and react live, but please persist any static configuration you need in the vernemq.conf file.
log.console.level = debug | info | warning | error
LevelDB means business with its allocated memory. It will eventually end up with the configured max, making it look like there's a memory leak, or even triggering OOM kills. Keep that in mind when configuring the percentage of RAM you give to LevelDB.
leveldb.maximum_memory.percent = 20
VerneMQ uses the Erlang distribution mechanism for most inter-node communication. VerneMQ identifies other machines in the cluster using Erlang identifiers (e.g. [email protected]). Erlang resolves these node identifiers to a TCP port on a given machine via the Erlang Port Mapper daemon (epmd) running on each cluster node.
By default, epmd binds to TCP port 4369 and listens on the wildcard interface. For inter-node communication, Erlang uses an unpredictable port by default; it binds to port 0, which means the first available port.
For ease of firewall configuration, VerneMQ can be configured to instruct the Erlang interpreter to use a limited range of ports. For example, to restrict the range of ports that Erlang will use for inter-Erlang node communication to 6000-7999, add the following lines to vernemq.conf on each VerneMQ node:
erlang.distribution.port_range.minimum = 6000
erlang.distribution.port_range.maximum = 7999
The settings above are only used for distributing subscription updates and maintenance messages. For distributing the 'real' MQTT messages the proper vmq listener must be configured in the vernemq.conf.
listener.vmq.clustering = 0.0.0.0:44053
Attributions:
This section, "VerneMQ Inter-node Communication", is a derivative of Security and Firewalls by Riak, used under Creative Commons Attribution 3.0 Unported License.
listener.ssl.mountpoint = ssl-mountpoint
listener.tcp.listener1.mountpoint = tcp-listener1
listener.tcp.listener2.mountpoint = tcp-listener2
listener.tcp.allowed_protocol_versions = 3,4,5
listener.tcp.default = 127.0.0.1:1883
listener.ws.default = 127.0.0.1:8888
listener.tcp.my_other = 127.0.0.1:18884
listener.tcp.my_other.max_connections = 100
listener.ssl.cafile = /etc/ssl/cacerts.pem
listener.ssl.certfile = /etc/ssl/cert.pem
listener.ssl.keyfile = /etc/ssl/key.pem
listener.ssl.default = 127.0.0.1:8883
listener.ssl.require_certificate = on
listener.ssl.use_identity_as_username = on
vmq-admin cluster join discovery-node=<OtherClusterNode>
vmq-admin cluster leave node=<NodeThatShouldGo> (only the first step!)
vmq-admin cluster show
log.error = on | off
log.error.file = /path/to/log/file
log.crash = on | off
log.crash.file = /path/to/log/file
log.crash.maximum_message_size = 64KB
## Acceptable values:
## - a byte size with units, e.g. 10GB
log.crash.size = 10MB
## For acceptable values see https://github.com/basho/lager/blob/master/README.md#internal-log-rotation
log.crash.rotation = $D0
log.crash.rotation.keep = 5
log.syslog = on
You can dynamically re-configure most of VerneMQ's settings on a running node by using the vmq-admin set command.
The following config values can be handled dynamically:
Settings dynamically configured with the vmq-admin set command will be reset by vernemq.conf upon broker restart.
Let's change the max_client_id_size as an example. (We might have noticed that some clients can't login because their client ID is too long, but we don't want to restart the broker for that). Note that you can also set multiple values with the same command.
You can show one or multiple values in a simple table:
This section elaborates on how a VerneMQ cluster deals with network partitions (a.k.a. netsplit or split-brain situation). A netsplit is mostly the result of a failure of one or more network devices resulting in a cluster where nodes can no longer reach each other.
VerneMQ is able to detect a network partition, and by default it will stop serving CONNECT, PUBLISH, SUBSCRIBE, and UNSUBSCRIBE requests. A properly implemented client will always resend unacked commands and messages are therefore not lost (QoS 0 publishes will be lost). However, in the time window between the network partition occurring and VerneMQ detecting the partition, much can happen. Moreover, this time frame will be different on every participating cluster node. In this guide we're referring to this time frame as the Window of Uncertainty.
The behaviour during a netsplit is completely configurable via allow_register_during_netsplit, allow_publish_during_netsplit, allow_subscribe_during_netsplit, and allow_unsubscribe_during_netsplit. These options supersede the trade_consistency option. In order to reach the same behaviour as trade_consistency = on, all the mentioned netsplit options have to be set to on.
VerneMQ follows an eventually consistent model for storing and replicating the subscription data. This also includes retained messages.
Due to the eventually consistent data model it is possible that during the Window of Uncertainty a publish won't take into account a subscription made on a remote node (in another partition). Obviously, VerneMQ can't deliver the message in this case. The same holds for delivering retained messages to remote subscribers.
Last will messages that are triggered during the Window of Uncertainty will be delivered to the reachable subscribers. Currently, during a netsplit but after the Window of Uncertainty, last will messages will be lost.
Normally, client registration is synchronized using an elected leader node for the given client id. Such a synchronization removes the race condition between multiple clients trying to connect with the same client id on different nodes. However, during the Window of Uncertainty it is currently possible that VerneMQ fails to disconnect a client connected to a different node. Although this scenario sounds artificially crafted, it is possible to end up with duplicate clients connected to the cluster.
As soon as the partition is healed, and connectivity reestablished, the VerneMQ nodes replicate the latest changes made to the subscription data. This includes all the changes 'accidentally' made during the Window of Uncertainty. This ensures that convergence regarding subscription data and retained messages is eventually reached.
Inspecting the retained message store
To list the retained messages simply invoke vmq-admin retain show:
$ vmq-admin retain show
+------------------+----------------+
| payload | topic |
+------------------+----------------+
| a-third-message | a/third/topic |
|some-other-message|some/other/topic|
| a-message | some/topic |
| a-message | another/topic |
+------------------+----------------+
Note, by default a maximum of 100 results are returned. This is a mechanism to protect the broker from overload as there can be millions of retained messages. Use --limit=<RowLimit> to override the default value.
Besides listing the retained messages it is also possible to filter them:
$ vmq-admin retain show --payload --topic=some/topic
+---------+
| payload |
+---------+
|a-message|
+---------+
In the above example we list only the payload for the topic some/topic.
Another example where all topics with retained messages with a specific payload are listed:
See the full set of options and documentation by invoking vmq-admin retain show --help.
Working with shared subscriptions
A shared subscription is a mechanism for distributing messages to a set of subscribers to a shared subscription topic, such that each message is received by only one subscriber. This contrasts with normal subscriptions where each subscriber will receive a copy of the published message.
A shared subscription is of the form $share/sharename/topic and subscribers to this topic will receive messages published to the topic topic. The messages will be distributed according to the defined distribution policy.
When subscribing to a shared subscription using command line tools remember to quote the topic, as some command line shells, like bash, will otherwise try to expand $share as an environment variable.
When working with a system like VerneMQ, sometimes when troubleshooting it would be nice to know what a client is actually sending and receiving and what VerneMQ is doing with this information. For this purpose VerneMQ has a built-in tracing mechanism which is safe to use in production settings as there is very little overhead in running the tracer, and it has built-in protection mechanisms to stop traces that produce too much information.
In this section the publish flow is described. VerneMQ provides multiple hooks throughout the flow of a message. The most important ones are the auth_on_publish and auth_on_publish_m5 hooks which act as an application level firewall granting or rejecting a publish message.
The auth_on_publish and auth_on_publish_m5 hooks allow your plugin to grant or reject publish messages sent by a client. The auth_on_publish hook is specified in the Erlang behaviour and the auth_on_publish_m5 hook in the behaviour available in the repo.
Description and Configuration of the Prometheus exporter
You can configure as many listeners as you wish in the vernemq.conf file. In addition to this, the vmq-admin listener command lets you configure, start, stop and delete listeners on the fly. Those can be MQTT, WebSocket or Cluster listeners; in the command line output they will be tagged mqtt, ws or vmq accordingly.
The graphite exporter reports the broker metrics at a fixed interval (defined in milliseconds) to a graphite server. The necessary configuration is done inside the vernemq.conf.
You can further tune the connection to the Graphite server:
There are a couple of hidden options you can set in the vernemq.conf file. Hidden means that you have to add and set the value explicitly. Hidden options still have default values. Changing them should be considered advanced, possibly with the exception of setting a max_message_rate.
Specify how the queue should deliver messages when multiple sessions are allowed. In case of fanout all the attached sessions will receive the message, in case of balance the message is delivered to only one of the attached sessions.
We recommend to use the rebar3 toolchain to generate the basic Erlang OTP application boilerplate and start from there.
Change the rebar.config file to include the vernemq_dev dependency:
Compile the application, this will automatically fetch vernemq_dev.
Now you're ready to implement the hooks. Don't forget to add the proper vmq_plugin_hooks entries to your src/myplugin.app.src file.
For a complete example, see the
You can loadtest VerneMQ with our . It is based on Machinezone's very powerful and lets you narrow down what hardware specs are needed to meet your performance goals. You can state your requirements for latency percentiles (and much more) in a formal way, and let vmq_mzbench automatically fail, if it can't meet the requirements.
If you have an AWS account, vmq_mzbench can automagically provision worker nodes for you. You can also run it locally, of course.
Please follow the
VerneMQ supports enhanced authentication flows (SASL style authentication) for MQTT 5.0 sessions. The enhanced authentication mechanism can be used for initial authentication when the client connects or to re-authenticate clients at a later point.
The on_auth_m5 hook allows the plugin to implement SASL style authentication flows by either accepting, rejecting (disconnecting the client) or continuing the flow. The on_auth_m5 hook is specified in the Erlang behaviour in the repo.
Description and Configuration of the built-in Monitoring mechanism
VerneMQ can be monitored in several ways. We implemented native support for , , and .
The metrics are also available via the command line tool:
Or with:
Which will output the metrics together with a short description describing what the metric is about. An example looks like:
Notice that the metrics:
Are no longer used (always 0) and will be removed in the future. They were replaced with mqtt_connack_sent using the return_code label. For MQTT 5.0 the reason_code label is used instead.
allow_anonymous
topic_alias_max_broker
receive_max_broker
vmq_acl.acl_file
graphite_host
vmq_acl.acl_reload_interval
graphite_enabled
queue_type
suppress_lwt_on_session_takeover
max_message_size
vmq_passwd.password_file
graphite_port
max_client_id_size
upgrade_outgoing_qos
max_message_rate
graphite_interval
allow_multiple_sessions
systree_enabled
max_last_will_delay
retry_interval
receive_max_client
max_offline_messages
max_online_messages
max_inflight_messages
allow_register_during_netsplit
vmq_passwd.password_reload_interval
topic_alias_max_client
systree_interval
allow_publish_during_netsplit
coordinate_registrations
remote_enqueue_timeout
persistent_client_expiration
allow_unsubscribe_during_netsplit
graphite_include_labels
shared_subscription_policy
queue_deliver_mode
allow_subscribe_during_netsplit
$ vmq-admin retain show --payload a-message --topic
+-------------+
| topic |
+-------------+
| some/topic |
|another/topic|
+-------------+
Please follow the documentation on the Prometheus website to properly configure the metrics scraping as well as how to access those metrics and configure alarms and graphs.
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
- job_name: 'vernemq'
scrape_interval: 5s
scrape_timeout: 5s
static_configs:
- targets: ['localhost:8888']
graphite_enabled = on
graphite_host = carbon.hostedgraphite.com
graphite_port = 2003
graphite_interval = 20000
graphite_api_key = YOUR-GRAPHITE-API-KEY
# set the connect timeout (defaults to 5000 ms)
graphite_connect_timeout = 10000
# set a reconnect timeout (default to 15000 ms)
graphite_reconnect_timeout = 10000
# set a custom graphite prefix (defaults to '')
graphite_prefix = vernemq
rebar3 new app name="myplugin" desc="this is my first VerneMQ plugin"
===> Writing myplugin/src/myplugin_app.erl
===> Writing myplugin/src/myplugin_sup.erl
===> Writing myplugin/src/myplugin.app.src
===> Writing myplugin/rebar.config
===> Writing myplugin/.gitignore
===> Writing myplugin/LICENSE
===> Writing myplugin/README.md
{erl_opts, [debug_info]}.
{deps, [{vernemq_dev,
{git, "git://github.com/vernemq/vernemq_dev.git", {branch, "master"}}}
]}.
rebar3 compile
===> Verifying dependencies...
===> Fetching vmq_commons ({git,
"git://github.com/vernemq/vernemq_dev.git",
{branch,"master"}})
===> Compiling vernemq_dev
===> Compiling myplugin
To trace a client the following command is available:
See the available flags by calling vmq-admin trace client --help.
A typical trace could look like the following:
In this particular trace a trace was started for the client with client-id client. At first no clients are connected to the node where the trace has been started, but a little later the client connects and we see the trace come alive. The strange identifier <7616.3443.1> is called a process identifier and is the identifier of the process in which the trace happened - this isn't relevant unless one wants to correlate the trace with log entries where process identifiers are also logged. Besides the process identifier there are some lines with MQTT SEND and MQTT RECV which are to be understood from the perspective of the broker. In the above trace this means that first the broker receives a CONNECT frame and replies with a CONNACK frame. Each MQTT event is annotated with the data from the MQTT frame to give as much detail and insight as possible.
Notice the auth_on_register call between CONNECT and CONNACK which is the authentication plugin hook being called to authenticate the client. In this case the hook returned ok which means the client was successfully authenticated.
Likewise notice the auth_on_subscribe call between the SUBSCRIBE and SUBACK frames, which is the plugin hook used to authorize whether this particular subscription should be allowed or not. In this case the subscription was authorized.
vmq-admin trace client client-id=<client-id>
Listeners started with the vmq-admin listener command will not survive a broker restart. Live changes to listeners configured in vernemq.conf are possible, but the vernemq.conf listeners will just be restarted with a broker restart.
This will start an MQTT listener on port 1884 and IP address 192.168.1.50. If you want to start a WebSocket listener, just tell VerneMQ by adding the --websocket flag. There are more options, mainly for configuring SSL (use vmq-admin listener start --help).
You can isolate client connections accepted by a certain listener from other clients by setting a mountpoint.
To start an MQTT listener using defaults, just set the port and IP address as a minimum.
You can add the -k or --kill_sessions switch to that command. This will disconnect all client connections setup by that listener. In combination with a mountpoint, this can be useful for terminating clients for a specific application, or to force re-connects to another cluster node (to prepare for a cluster leave for your node).
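For example, combining the stop command shown further down with the kill switch (address and port are illustrative):
vmq-admin listener stop address=192.168.1.50 port=1884 --kill_sessions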
Specify how queues should process messages, either the fifo or lifo way. Default is fifo.
Specifies the maximum incoming publish rate per session per second. Depending on the underlying network buffers this rate isn't enforced. Defaults to 0, which means no rate limits apply. Setting to a value of 2 limits any publisher to 2 messages per second, for instance.
Due to the eventually consistent nature of the subscriber store it is possible that during queue migration messages still arrive on the old cluster node. This parameter allows compensating for this by keeping the queue around for some time (in seconds) after it was migrated to the other cluster node.
Specifies the number of messages that are delivered to the remote node per drain step. A large value will provide a faster migration of a queue, but increases the waste of bandwidth in case the migration fails.
Allows selecting a different default reg_view. A reg_view is a pre-defined way to route messages. Multiple views can be loaded and used, but one has to be selected as a default. The default routing is vmq_reg_trie, i.e. routing via the built-in trie data structure.
A list of views that are started during startup. It's only used in plugins that want to choose dynamically between routing reg_views.
An integer specifying how many bytes are buffered in case the remote node is not available. Defaults to 10000.
Actually, you don't even have to install vmq_mzbench, if you don't want to. Your scenario file will automatically fetch vmq_mzbench for any test you do. vmq_mzbench runs every test independently, so it has a provisioning step for any test, even if you only run it on a local worker.
To install vmq_mzbench on your computer, go through the following steps:
To provision your tests from this local repository, you'll have to tell the scenario scripts to use rsync. Add this to the scenario file:
If you'd just like the script itself to fetch vmq_mzbench, then you can direct it to GitHub:
You can familiarize yourself quickly with MZBench's guide on writing loadtest scenarios.
There's not much to learn, just make sure you understand how pools and loops work. Then you can add the vmq_mzbench statement functions to the mix and define actual loadtest scenarios.
Currently vmq_mzbench exposes the following statement functions for use in MQTT scenario files:
random_client_id(State, Meta, I): Create a random client Id of length I
fixed_client_id(State, Meta, Name, Id): Create a deterministic client Id with schema Name ++ "-" ++ Id
worker_id(State, Meta): Get the internal, sequential worker Id
client(State, Meta): Get the client Id you set yourself during connection setup with the option {t, client, "client"}
connect(State, Meta, ConnectOpts): Connect to the broker with the options given in ConnectOpts
disconnect(State, Meta): Disconnect normally
subscribe(State, Meta, Topic, QoS): Subscribe to Topic with Quality of Service QoS
unsubscribe(State, Meta, Topic): Unsubscribe from Topic
publish(State, Meta, Topic, Payload, QoS): Publish a message with binary Payload to Topic with QoS
publish(State, Meta, Topic, Payload, QoS, RetainFlag): Publish a message with binary Payload to Topic with QoS and RetainFlag
It's easy to add more statement functions to the MQTT worker if needed, get in touch with us.
For example, to show the number of CONNACK packets sent with the not_authorized label:
All available labels can be shown using vmq-admin metrics show --help.
vmq-admin metrics show
vmq-admin metrics show -d
# The number of AUTH packets received.
counter.mqtt_auth_received = 0
# The number of times a MQTT queue process has been initialized from offline storage.
counter.queue_initialized_from_storage = 0
# The number of PUBLISH packets sent.
counter.mqtt_publish_sent = 10
# The number of bytes used for storing retained messages.
gauge.retain_memory = 21184
vmq-admin set max_client_id_size=45
vmq-admin set max_client_id_size=45 [email protected]
vmq-admin set max_client_id_size=45 --all
vmq-admin show max_client_id_size retry_interval
+----------------------+------------------+--------------+
| node |max_client_id_size|retry_interval|
+----------------------+------------------+--------------+
|[email protected]| 28 | 20 |
+----------------------+------------------+--------------+
vmq-admin show max_client_id_size retry_interval --node [email protected]
vmq-admin show max_client_id_size retry_interval --all
+----------------------+------------------+--------------+
| node |max_client_id_size|retry_interval|
+----------------------+------------------+--------------+
|[email protected]| 33 | 20 |
|[email protected]| 33 | 20 |
|[email protected]| 33 | 20 |
|[email protected]| 33 | 20 |
|[email protected]| 28 | 20 |
+----------------------+------------------+--------------+
$ vmq-admin trace client client-id=client
No sessions found for client "client"
New session with PID <7616.3443.1> found for client "client"
<7616.3443.1> MQTT RECV: CID: "client" CONNECT(c: client, v: 4, u: username, p: password, cs: 1, ka: 30)
<7616.3443.1> Calling auth_on_register({{172,17,0,1},34274},{[],<<"client">>},username,password,true)
<7616.3443.1> Hook returned "ok"
<7616.3443.1> MQTT SEND: CID: "client" CONNACK(sp: 0, rc: 0)
<7616.3443.1> MQTT RECV: CID: "client" SUBSCRIBE(m1) with topics:
q:0, t: "topic"
<7616.3443.1> Calling auth_on_subscribe(username,{[],<<"client">>}) with topics:
q:0, t: "topic"
<7616.3443.1> Hook returned "ok"
<7616.3443.1> MQTT SEND: CID: "client" SUBACK(m1, qt[0])
<7616.3443.1> Trace session for client stopped
vmq-admin listener show
+----+-------+------------+-----+----------+---------+
|type|status | ip |port |mountpoint|max_conns|
+----+-------+------------+-----+----------+---------+
|vmq |running|192.168.1.50|44053| | 30000 |
|mqtt|running|192.168.1.50|1883 | | 30000 |
+----+-------+------------+-----+----------+---------+
vmq-admin listener start address=192.168.1.50 port=1884 --mountpoint /test --nr_of_acceptors=10 --max_connections=1000
vmq-admin listener stop address=192.168.1.50 port=1884
vmq-admin listener restart address=192.168.1.50 port=1884
vmq-admin listener delete address=192.168.1.50 port=1884
queue_deliver_mode = balance
queue_type = fifo
max_message_rate = 2
max_drain_time = 20
max_msgs_per_drain_step = 1000
vmq_reg_view = "vmq_reg_trie"
reg_views = "[vmq_reg_trie]"
outgoing_clustering_buffer_size = 15000
git clone git://github.com/erlio/vmq_mzbench.git
cd vmq_mzbench
./rebar get-deps
./rebar compile
{make_install, [
{rsync, "/path/to/your/installation/vmq_mzbench/"},
{exclude, "deps"}]},{make_install, [
{git, "git://github.com/erlio/vmq_mzbench.git"}]},mqtt_connack_not_authorized_sent
mqtt_connack_bad_credentials_sent
mqtt_connack_server_unavailable_sent
mqtt_connack_identifier_rejected_sent
mqtt_connack_unacceptable_protocol_sent
mqtt_connack_accepted_sent
vmq-admin metrics show --return_code=not_authorized
counter.mqtt_connack_sent = 0
$share
Currently three message distribution policies for shared subscriptions are supported: prefer_local, random and local_only. Under the random policy messages will be published to a random member of the shared subscription, if any exist. Under the prefer_local policy messages will be delivered to a random node-local member of the shared subscription; if none exist, the message will be delivered to a random member of the shared subscription on a remote cluster node. Under the local_only policy messages will be delivered to a random node-local member of the shared subscription.
When a message is being delivered to subscribers of a shared subscription, the message will be delivered to an online subscriber if possible, otherwise the message will be delivered to an offline subscriber. Notice that the shared subscription policy is applied before considering online or offline status of clients.
Subscriptions Note: When subscribing to a shared topic, make sure to escape the $.
So, for dash or bash shells:
Publishing Note: When publishing to a shared topic, do not include the prefix $share/group/ as part of the publish topic name
Every plugin that implements the auth_on_publish or auth_on_publish_m5 hooks is part of a conditional plugin chain. For this reason we allow the hook to return different values. In case the plugin can't validate the publish message it is best to return next as this would allow subsequent plugins in the chain to validate the request. If no plugin is able to validate the request it gets automatically rejected.
The on_publish and on_publish_m5 hooks allow your plugin to get informed about an authorized publish message. The hook is specified in the Erlang behaviour on_publish_hook and the on_publish_m5 hook in the on_publish_m5_hook behaviour available in the vernemq_dev repo.
The on_offline_message hook allows your plugin to get notified about a new a queued message for a client that is currently offline. The hook is specified in the Erlang behaviour on_offline_message_hook available in the vernemq_dev repo.
The on_deliver and on_deliver_m5 hooks allow your plugin to get informed about outgoing publish messages, but also allows you to rewrite topic and payload of the outgoing message. The hook is specified in the Erlang behaviour on_deliver_hook and the on_deliver_m5 hook in the on_deliver_m5_hook behaviour available in the vernemq_dev repo.
Every plugin that implements the on_deliver or on_deliver_m5 hooks is part of a conditional plugin chain, although NO verdict is required in this case. The message gets delivered in any case. If your plugin uses this hook to rewrite the message the plugin system stops evaluating subsequent plugins in the chain.
VerneMQ provides multiple hooks throughout the lifetime of a session. The most important ones are the auth_on_register and auth_on_register_m5 hooks which act as an application level firewall granting or rejecting new clients.
The auth_on_register and auth_on_register_m5 hooks allow your plugin to grant or reject new client connections. Moreover it lets you exert fine grained control over the configuration of the client session. The auth_on_register hook is specified in the Erlang behaviour and the auth_on_register_m5 hook in the behaviour available in the repo.
Every plugin that implements the auth_on_register or auth_on_register_m5 hooks is part of a conditional plugin chain. For this reason we allow the hook to return different values depending on how the plugin grants or rejects this client. In case the plugin doesn't know the client it is best to return next as this would allow subsequent plugins in the chain to validate this client. If no plugin is able to validate the client it gets automatically rejected.
The on_auth_m5 hook allows your plugin to implement MQTT enhanced authentication, see .
The on_register and on_register_m5 hooks allow your plugin to get informed about a newly authenticated client. The hook is specified in the Erlang behaviour and the behaviour available in the repo.
Once a new client was successfully authenticated and the above described hooks have been called, the client attaches to its queue. If it is a returning client using clean_session=false or if the client had previous sessions in the cluster, this process could take a while. (As offline messages are migrated to a new node, existing sessions are disconnected). The hook is called at the point where a queue has been successfully instantiated, possible offline messages migrated, and potential duplicate sessions have been disconnected. In other words: when the client has reached a completely initialized, normal state for accepting messages. The hook is specified in the Erlang behaviour on_client_wakeup_hook available in the repo.
This hook is called if an MQTT 3.1/3.1.1 client using clean_session=false or an MQTT 5.0 client with a non-zero session_expiry_interval closes the connection or gets disconnected by a duplicate client. The hook is specified in the Erlang behaviour available in the repo.
This hook is called if an MQTT 3.1/3.1.1 client using clean_session=true or an MQTT 5.0 client with the session_expiry_interval set to zero closes the connection or gets disconnected by a duplicate client. The hook is specified in the Erlang behaviour available in the repo.
This describes a quick way to create a VerneMQ cluster on developer's machines
Sometimes you want to have a quick way to test a cluster on your development machine as a VerneMQ developer.
You need to take care of a couple things if you want to run multiple VerneMQ instances on the same machine. There is a make option that lets you build multiple releases, as a convenience, taking care of all the configuration.
First, build a normal release (this is just needed the first time) with:
➜ default git:(master) ✗ make rel
The following command will then prepare 3 correctly configured vernemq.conf files, with different ports for the MQTT listeners etc. It will also build 3 full VerneMQ releases.
➜ default git:(master) ✗ make dev1 dev2 dev3
Check if you have the 3 new releases in the _build directory of your VerneMQ code repo.
You can then start the respective broker instances in 3 terminal windows, by using the respective commands and directory paths. Example:
➜ (_build/dev2/rel/vernemq/bin) ✗ vernemq console
The MQTT listeners will of course be configured differently for each node (the default 1883 port is not used, so that you can still run a default MQTT broker besides your dev nodes). A couple of other ports are also adapted (HTTP status page, cluster communication). The MQTT ports are automatically configured in increasing steps of 50 (if in doubt, consult the respective vernemq.conf files):
Note that the dev nodes are not automatically clustered. You still need to manually cluster them with commands like the following:
➜ (_build/dev2/rel/vernemq/bin) ✗ vmq-admin cluster join [email protected]
You need to know about and configure a couple of Operating System and Erlang VM configs to operate VerneMQ efficiently. First, make sure you have set appropriate OS file limits according to our . Second, when you run into performance problems, don't forget to check the . (Can't open more than 10k connections? Well, is the listener configured to open more than 10k?)
shared_subscription_policy = prefer_local
mosquitto_sub -h mqtt.example.io -p 1883 -q 2 -t \$share/group/topicname
mosquitto_sub -h mqtt.example.io -p 1883 -q 2 -t \$share/group/topicname/#
mosquitto_pub -h mqtt.example.io -p 1883 -t topicname -m "This is a test message"
mosquitto_pub -h mqtt.example.io -p 1883 -t topicname/group1 -m "This is a test message"
Node | MQTT listener port
dev1 | 10053
dev2 | 10103
dev3 | 10153
... | ...
This is the number one topic to look at, if you need to keep an eye on RAM usage.
Context: All network I/O in Erlang uses an internal driver. This driver will allocate and handle an internal application side buffer for every TCP connection. The default size of these buffers will determine your overall RAM use in VerneMQ. The sndbuf and recbuf of the TCP socket will not count towards VerneMQ RAM, but will be used by the Linux Kernel.
VerneMQ calculates the buffer size from the OS level TCP send and receive buffers:
val(buffer) >= max(val(sndbuf),val(recbuf))
Those values correspond to net.ipv4.tcp_wmem and net.ipv4.tcp_rmem in your OS's sysctl configuration. One way to minimize RAM usage is therefore to configure those settings (Debian example):
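A sketch of what such sysctl settings could look like, sized so that both the send and receive buffers max out at 32KB (the three values are min, default and max in bytes, and are only illustrative):
sudo sysctl -w net.ipv4.tcp_rmem="4096 16384 32768"
sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 32768"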
This would result in a 32KB application buffer for every connection.
If your VerneMQ use case requires the use of different TCP buffer optimisations (per groups of clients for instance) you will have to make sure that the Linux OS buffer configuration, namely net.ipv4.tcp_wmem and net.ipv4.tcp_rmem, allows for this kind of flexibility, allowing for small TCP buffers and big TCP buffers at the same time.
Example 1 above would allow VerneMQ to allocate minimal TCP read and write buffers of 4KB in the Linux Kernel, a max read buffer of 32KB in the kernel, and a max write buffer of 65KB in the kernel. VerneMQ itself would set its own internal per connection buffer to 65KB in addition.