Welcome to the VerneMQ documentation! This is a reference guide for most of the available features and options of VerneMQ. The Getting Started guide might be a good entry point.
For a more general overview on VerneMQ and MQTT, you might want to start with the introduction.
For downloading the subscription-based binary VerneMQ packages and/or a quick description on how to compile VerneMQ from sources, see Downloads.
The VerneMQ Documentation project is an open-source effort, and your contributions are very welcome and appreciated. You can contribute on all levels:
Language, style and typos
Fixing obvious documentation errors and gaps
Providing more details and/or examples for specific topics
Extending the documentation where you find this useful to do
Note that the documentation is versioned according to the VerneMQ releases. You can click the "Edit on Github" button in the upper right corner of every page to check what branch and document you are on. You can then create a Pull Request (PR) against that branch from your fork of the VerneMQ documentation repository. (Direct edits on Github are possible for members of the documentation repository).
VerneMQ comes with a simple file-based password authentication mechanism which is enabled by default. If you don't need this it can be disabled by setting:
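A minimal sketch of the relevant setting in vernemq.conf (assuming the standard plugin name):

```
plugins.vmq_passwd = off
```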
Per default VerneMQ doesn't accept any client that hasn't been configured using vmq-passwd. If you want to change this and accept any client connection you can set:
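```
allow_anonymous = on
```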
Warning: Setting allow_anonymous=on completely disables authentication in the broker and plugin authentication hooks are never called! Find more information on the authentication hooks here.
In a production setup you can use the provided password based authentication mechanism, one of the provided authentication Database plugins, or implement your own authentication plugins.
VerneMQ periodically checks the specified password file. The check interval defaults to 10 seconds and can also be defined in the vernemq.conf. Setting password_reload_interval = 0 disables automatic reloading.
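For example (a sketch, assuming the usual option names in vernemq.conf):

```
vmq_passwd.password_file = /etc/vernemq/vmq.passwd
vmq_passwd.password_reload_interval = 10
```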
Both configuration parameters can also be changed at runtime using the vmq-admin script.
Example: to dynamically set the reload interval to 60 seconds on all your cluster nodes, issue the following command on one of the nodes:
sudo vmq-admin set vmq_passwd.password_reload_interval=60 --all
vmq-passwd is a tool for managing password files for the VerneMQ broker. Usernames must not contain ":", and passwords are stored in a format similar to crypt(3).
How to use vmq-passwd
Options
-c
Creates a new password file. Does not overwrite existing file.
-cf
Creates a new password file. If the file already exists, it will be overwritten.
-D
Deletes the specified user from the password file.
-U
This option can be used to upgrade/convert a password file with plain text passwords into one using hashed passwords. It will modify the specified file. It does not detect whether passwords are already hashed, so using it on a password file that already contains hashed passwords will generate new hashes based on the old hashes and render the password file unusable. Note, with this option neither usernames nor passwords may contain ":".
passwordfile
The password file to modify.
username
The username to add/update/delete.
Examples
Add a user to a new password file: (you can choose an arbitrary name for the password file, it only has to match the configuration in the VerneMQ configuration file).
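A sketch (the file path and username are placeholders):

```
vmq-passwd -c /etc/vernemq/vmq.passwd henry
```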
Delete a user from a password file
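For example (same placeholder file and user as above):

```
vmq-passwd -D /etc/vernemq/vmq.passwd henry
```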
Acknowledgements
The original version of vmq-passwd
was developed by Roger Light (roger@atchoo.org).
vmq-passwd includes:
software developed by the OpenSSL Project (http://www.openssl.org/) for use in the OpenSSL Toolkit.
cryptographic software written by Eric Young (eay@cryptsoft.com)
software written by Tim Hudson (tjh@cryptsoft.com)
VerneMQ comes with a simple ACL based authorization mechanism which is enabled by default. If you don't need this it can be disabled by setting:
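A minimal sketch of the relevant setting (assuming the standard plugin name):

```
plugins.vmq_acl = off
```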
VerneMQ periodically checks the specified ACL file. The check interval defaults to 10 seconds and can also be defined in the vernemq.conf. Setting acl_reload_interval = 0 disables automatic reloading.
Both configuration parameters can also be changed at runtime using the vmq-admin script.
Topic access is added with lines of the format:
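The general shape of such a line, as described below (a sketch following the Mosquitto-style ACL format):

```
topic [read|write] <topic>
```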
The access type is controlled using read or write. If not provided, then read and write access is granted for the topic. The topic can use the MQTT subscription wildcards + or #.
The first set of topics is applied to all anonymous clients (assuming allow_anonymous = on). User-specific ACLs are added after a user line as follows (this is the username, not the client id):
It is also possible to define ACLs based on pattern substitution within the topic. The form is the same as for the topic keyword, but using pattern as the keyword.
The patterns available for substitution are:
%c to match the client id of the client
%u to match the username of the client
The substitution pattern must be the only text for that level of hierarchy. Pattern ACLs apply to all users even if the user keyword has previously been given.
Example:
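A sketch of pattern-based ACL lines (the topics are illustrative):

```
pattern write sensors/%u/#
pattern read clients/%c/commands
```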
VerneMQ currently doesn't cancel active subscriptions in case the ACL file revokes access for a topic. It is possible to reauthenticate sessions manually (vmq-admin).
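As an illustration, an ACL file producing the permissions described below might look like this (a sketch):

```
# anonymous clients
topic bar
topic write foo
topic read open_to_all

user john
topic foo
topic read baz
topic write open_to_all
```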
Anonymous users are allowed to
publish & subscribe to topic bar.
publish to topic foo.
subscribe to topic open_to_all.
User john is allowed to
publish & subscribe to topic foo.
subscribe to topic baz.
publish to topic open_to_all.
VerneMQ supports multiple ways to configure one or many MQTT listeners.
Listeners specify on which IP address and port VerneMQ should accept new incoming connections. Depending on the chosen transport (TCP, SSL, WebSocket) different configuration parameters have to be provided. VerneMQ allows the listener configurations to be written in a hierarchical manner, enabling very flexible setups. VerneMQ applies reasonable defaults on the top level, which can of course be overridden if needed.
These are the only default parameters that are applied for all transports, and the only ones that are of interest for plain TCP and WebSocket listeners.
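A sketch of those top-level defaults (values shown are only illustrative):

```
listener.max_connections = 10000
listener.nr_of_acceptors = 10
```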
These global defaults can be overridden for a specific transport protocol (listener.tcp.CONFIG = VAL), or even for a specific listener (listener.tcp.LISTENER.CONFIG = VAL). The placeholder LISTENER is freely chosen and is only used as a reference for further configuring this particular listener.
Normally, an MQTT broker hosts one single topic tree. This means that all topics are accessible to all publishers and subscribers (limited by the ACLs you configured, of course). Mountpoints are a way to host multiple topic trees in a single broker. They are completely separated and clients with different topic trees cannot publish messages to each other. This could be useful if you provide MQTT services to multiple separated use cases/verticals or clients, with a single broker. Note that mountpoints are configured via different listeners. As a consequence, the MQTT clients will have to connect to a specific port to connect to a specific topic space (mountpoint).
The mountpoints can be configured on the protocol level or configured/overridden on the specific listener level.
Since VerneMQ 1.5.0 it is possible to configure which MQTT protocol versions a listener will accept.
VerneMQ supports MQTT 3.1, 3.1.1, and 5.0 (since VerneMQ 1.6.0). To allow these protocol versions, set:
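For example (a sketch for a TCP listener):

```
listener.tcp.allowed_protocol_versions = 3,4,5
```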
Here 3,4,5 are the protocol level versions corresponding to MQTT 3.1, 3.1.1 and 5.0 respectively. The default value is 3,4, thus allowing MQTT 3.1 and 3.1.1, while MQTT 5.0 is disabled.
Listen on TCP port 1883 and for WebSocket Connections on port 8888:
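A sketch of such a configuration (IP addresses are placeholders):

```
listener.tcp.default = 127.0.0.1:1883
listener.ws.default = 127.0.0.1:8888
```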
An additional listener can be added by using a different name. In the example above the name equals default and can be used for further configuring this particular listener. The following example demonstrates how an additional listener is defined, as well as how the maximum number of connections can be limited for this listener:
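A sketch (the listener name my_other and the values are placeholders):

```
listener.tcp.default = 127.0.0.1:1883
listener.tcp.my_other = 127.0.0.1:18884
listener.tcp.my_other.max_connections = 100
```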
VerneMQ listeners can be configured to accept connections from a proxy server that supports the PROXY protocol. This enables VerneMQ to retrieve peer information such as source IP and port, and, with PROXY protocol version 2, TLS client certificate details if the proxy was used to terminate TLS.
To enable the PROXY protocol for TCP listeners use listener.tcp.proxy_protocol = on, or for a specific listener use listener.tcp.LISTENER.proxy_protocol = on.

If client certificates are used you can set listener.tcp.proxy_protocol_use_cn_as_username = on, which will overwrite the MQTT username set by the client with the common name from the client certificate before authentication and authorization is performed.
VerneMQ supports different Transport Layer Security (TLS) options, which allow for secure communication between MQTT clients and VerneMQ.
TLS provides secure communication between devices by encrypting the data in transit, preventing unauthorized access and ensuring the integrity of the data. VerneMQ supports various TLS options, including the use of certificates, mutual authentication, Pre-Shared Keys and the ability to specify specific ciphersuites and TLS versions.
VerneMQ supports the following TLS flavours:
Server Side TLS
TLS-PSK
Mutual TLS (mTLS)
In server-side TLS, the client initiates a TLS handshake with the broker, and the broker responds by sending its certificate. The client verifies the certificate and generates a symmetric key, which is used to encrypt and decrypt data exchanged between the client and broker. Server-side TLS does no further authentication or authorization of the client. The broker later on authenticates and authorizes clients through MQTT.
TLS-PSK (Pre-Shared Key) secures communication between MQTT client and broker using pre-shared keys for authentication. Unlike server-side or mutual TLS, which use certificates to authenticate the server and client, TLS-PSK uses a pre-shared secret (a key) to authenticate the endpoints. Clients that support TLS-PSK can use the specified pre-shared keys to authenticate themselves to the broker, providing a lightweight alternative to certificate-based authentication. The key has to be securely stored on the MQTT device.
Mutual TLS (mTLS) provides mutual authentication and encryption of data in transit between MQTT client and broker. Unlike server-side TLS, where only the server is authenticated to the client, mTLS requires both the client and server to authenticate each other before establishing a secure connection.
The decision to use TLS, TLS-PSK, or mTLS depends on your specific use case and security requirements.
Accepting SSL connections on port 8883:
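A sketch of an SSL listener configuration (certificate paths are placeholders):

```
listener.ssl.cafile = /etc/ssl/cacerts.pem
listener.ssl.certfile = /etc/ssl/cert.pem
listener.ssl.keyfile = /etc/ssl/key.pem
listener.ssl.default = 0.0.0.0:8883
```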
The following configuration snippet enables TLS-PSK authentication on VerneMQ's SSL listener, specifies the location of the pre-shared key file, and sets the list of ciphers to be used for encryption. Clients that support TLS-PSK can use the specified pre-shared keys to authenticate themselves to the broker, providing a lightweight alternative to certificate-based authentication.
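A sketch of what this could look like; the option names (psk_support, pskfile, ciphers) and the cipher value are assumptions used for illustration only:

```
listener.ssl.default = 0.0.0.0:8883
listener.ssl.psk_support = on
listener.ssl.pskfile = /etc/vernemq/psk.conf
listener.ssl.ciphers = PSK-AES256-CBC-SHA
```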
The PSK file contains a list of matching identifiers and psk keys.
If you want to use client certificates to authenticate your clients you have to set the following option:
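```
listener.ssl.require_certificate = on
```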
If you use client certificates and want to use the certificate's CN value as a username, you can set:
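```
listener.ssl.use_identity_as_username = on
```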
Both options require_certificate and use_identity_as_username default to off.
The same configuration options can be used for securing WebSocket connections; just use wss as the protocol identifier, e.g. listener.wss.require_certificate.
With SSL, you still need to configure authentication and authorization! That is, set allow_anonymous to off, and configure vmq_acl and vmq_passwd or your authentication plugin.
The default listener listener.vmq.clustering is used for distributing MQTT messages among the cluster nodes.
A quick and simple guide to get started with VerneMQ
VerneMQ is a high-performance, distributed MQTT message broker. It scales horizontally and vertically on commodity hardware to support a high number of concurrent publishers and consumers while maintaining low latency and fault tolerance. To use it, all you need to do is install the VerneMQ package.
Choose your OS and follow the instructions:
It is also possible to run VerneMQ using our Docker image:
If you built VerneMQ from sources, you can add the /bin directory of your VerneMQ release to PATH. For example, if you compiled VerneMQ in the /home/vernemq directory, then add the binary directory (/home/vernemq/_build/default/rel/vernemq/bin) to your PATH, so that VerneMQ commands can be used in the same manner as with a packaged installation.
To start a VerneMQ broker, use the vernemq start command in your Shell:
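```
vernemq start
```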
A successful start will return no output. If there is a problem starting the broker, an error message is printed to STDERR.
To run VerneMQ with an attached interactive Erlang console:
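```
vernemq console
```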
A VerneMQ broker is typically started in console mode for debugging or troubleshooting purposes. Note that if you start VerneMQ in this manner, it is running as a foreground process that will exit when the console is closed.
You can close the console by issuing this command at the Erlang prompt:
Once your broker has started, you can initially check that it is running with the vernemq ping command:
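```
vernemq ping
```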
The command will respond with pong if the broker is running, or Node <NodeName> not responding to pings in case it's not.
As you may have noticed, VerneMQ will warn you at startup when your system's open files limit (ulimit -n) is too low. You're advised to increase the OS default open files limit when running VerneMQ. Read more about why and how in the Open Files Limit documentation.
Configure Non-Standard MQTT Options VerneMQ Supports.
Set the maximum size for client ids. MQTT v3.1 specifies a limit of 23 characters.
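For example:

```
max_client_id_size = 23
```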
This option defaults to 23.
Usually, you'll configure permissions on your topic structures using ACLs. In addition to that, topic_max_depth sets a global maximum value for topic levels. This protects the broker from clients subscribing to arbitrarily deep topic levels.
The default value for topic_max_depth is 10. As an example, this value will allow topics like a/b/c/d/e/f/g/h/i/k, that is 10 levels. A client running into the topic depth limit will be disconnected and an error will be logged.
This option allows persistent clients (those with clean_session set to false) to be removed if they do not reconnect within a certain time frame.
This is a non-standard option. As far as the MQTT specification is concerned, persistent clients are persisted forever.
The expiration period should be an integer followed by one of h, d, w, m, y for hour, day, week, month, and year; or never:
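For example, to expire persistent clients after one week (a sketch):

```
persistent_client_expiration = 1w
```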
This option defaults to never.
Limit the maximum publish payload size in bytes that VerneMQ allows. Messages that exceed this size won't be accepted.
Defaults to 0, which means that all valid messages are accepted. The MQTT specification imposes a maximum payload size of 268435455 bytes.
How to setup and configure the HTTP listener.
The VerneMQ HTTP listener is used to serve various VerneMQ subsystems such as the status page, health checks, metrics, and the HTTP management API.
By default the listener runs on port 8888. To disable the HTTP listener, to use an HTTPS listener instead, or to change the port, adapt the configuration in vernemq.conf:
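For example:

```
listener.http.default = 127.0.0.1:8888
```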
You can have multiple HTTP(S) listeners listening on different ports and running different modules:
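A sketch of such a setup; the listener names and the module names listed here are assumptions used for illustration:

```
listener.https.default = 0.0.0.0:443
listener.https.default.http_modules = vmq_status_http, vmq_health_http, vmq_metrics_http
listener.https.mgmt = 0.0.0.0:8443
listener.https.mgmt.http_modules = vmq_http_mgmt_api
```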
This configuration snippet defines two HTTPS listeners with different modules. One for default traffic and one for management traffic. It specifies which HTTP modules will be enabled on each listener, allowing for status, health, and metrics information to be retrieved from the default listener and providing a web-based interface for managing and monitoring VerneMQ through the management listener.
VerneMQ can be installed on CentOS-based systems using the binary package we provide.
Once you have downloaded the binary package, execute one of the following commands to install VerneMQ:
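A sketch (the package file name is a placeholder for the version you downloaded):

```
sudo yum install vernemq-<VERSION>.centos7.x86_64.rpm
# or
sudo rpm -Uvh vernemq-<VERSION>.centos7.x86_64.rpm
```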
To use the provided binary packages the VerneMQ EULA must be accepted. See Accepting the VerneMQ EULA for more information.
Once you've installed VerneMQ, start it on your node:
You can verify that VerneMQ is successfully installed by running:
If VerneMQ has been installed successfully, vernemq is returned.
Now that you've installed VerneMQ, check out How to configure VerneMQ.
As well as being available as packages that can be installed directly into the operating systems, VerneMQ is also available as a Docker image. Below is an example of how to set up a couple of VerneMQ Docker containers.
To use the provided docker images the VerneMQ EULA must be accepted. See Accepting the VerneMQ EULA for more information.
Sometimes you need to configure a forwarding for ports (on a Mac for example):
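A sketch of such a command; the image name and the DOCKER_VERNEMQ_ACCEPT_EULA variable are assumptions used for illustration:

```
docker run -p 1883:1883 -p 8080:8080 \
    -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" \
    --name vernemq1 -d vernemq/vernemq
```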
This starts a new node that listens on 1883 for MQTT connections and on 8080 for MQTT over WebSocket connections. However, at this moment the broker won't be able to authenticate the connecting clients. To allow anonymous clients use the DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on environment variable.
Warning: Setting allow_anonymous=on completely disables authentication in the broker and plugin authentication hooks are never called! See more information about the authentication hooks here.
This allows a newly started container to automatically join a VerneMQ cluster. Assuming you started your first node like the example above you could autojoin the cluster (which currently consists of a single container 'vernemq1') like the following:
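A sketch of such an autojoin; the DOCKER_VERNEMQ_DISCOVERY_NODE variable name and image name are assumptions used for illustration:

```
docker run -e "DOCKER_VERNEMQ_DISCOVERY_NODE=<IP-OF-VERNEMQ1>" \
    --name vernemq2 -d vernemq/vernemq
```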
(Note, you can find the IP of a Docker container using docker inspect <CONTAINER_NAME> | grep \"IPAddress\".)
To check if the above containers have successfully clustered you can issue the vmq-admin command:
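For example (running the command inside the first container):

```
docker exec vernemq1 vmq-admin cluster show
```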
Everything you must know to properly configure VerneMQ
Every VerneMQ node has to be configured, as the default configuration probably does not match your needs. Depending on the installation method and chosen platform the configuration file vernemq.conf resides at different locations. If VerneMQ was installed through a Linux package, the default location for the configuration file is /etc/vernemq/vernemq.conf.
The vernemq.conf file:
A single setting is handled on one line.
Lines are structured Key = Value
Any line starting with # is a comment, and will be ignored.
You certainly want to try out VerneMQ right away. To just check the broker without configured authentication for now, you can allow anonymous access:
Set allow_anonymous = on
By default the vmq_acl authorization plugin is enabled and configured to allow publishing and subscribing to any topic (basically allowing everything); check the section on file-based authorization for more information.

Setting allow_anonymous=on completely disables authentication in the broker and plugin authentication hooks are never called! Find the details on all the authentication hooks here. In a production system you should configure vmq_acl to be less permissive or configure some other plugin to handle authorization.
Configure how VerneMQ handles certain aspects of MQTT
Set the time in seconds that VerneMQ will wait after a QoS 1 or QoS 2 message has been sent before retrying when no response is received.

This option defaults to 20 seconds.
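For example:

```
retry_interval = 20
```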
This option defines the maximum number of QoS 1 or 2 messages that can be in the process of being transmitted simultaneously.
Defaults to 20 messages; use 0 for no limit. The inflight window serves as a protection for sessions on the incoming side.
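For example:

```
max_inflight_messages = 20
```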
The maximum number of messages to hold in the queue above those messages that are currently in flight. Defaults to 1000; set to -1 for no limit. This option protects a client session from overload by dropping messages (of any QoS). This parameter was named max_queued_messages in 0.10.*. Note that 0 will totally block message delivery from any queue!
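For example:

```
max_online_messages = 1000
```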
This option specifies the maximum number of QoS 1 and 2 messages to hold in the offline queue. Defaults to 1000 messages; use -1 for no limit, or 0 if no messages should be stored.
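For example:

```
max_offline_messages = 1000
```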
In contrast to the session-based inflight window, max_online_messages and max_offline_messages serve as a protection of queues on the outgoing side.
When an offline session transitions to online, by default VerneMQ will adhere to the queue sizes also for moving data from the offline queue to the online queue. Therefore, if max_offline_messages > max_online_messages, VerneMQ will start dropping messages. It is possible to override this behaviour and allow VerneMQ to move all messages from the offline queue to the online queue. The queue will then be batched (or streamed) to the subscribers, and the messages are read from disk in batches as well. The additional memory needed is thus just the amount needed to store references to those messages, not the messages themselves.
VerneMQ supports multiple ways to authenticate and authorize new client connections using a database.
VerneMQ supports authentication and authorization using a number of popular databases and the below sections describe how to configure the different databases.
The database drivers are handled using the vmq_diversity plugin, which therefore needs to be enabled:
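```
plugins.vmq_diversity = on
```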
The vmq_diversity plugin makes it possible to extend VerneMQ using Lua. The documentation can be found here.
When using database based authentication/authorization the enabled-by-default file based authentication and authorization are most likely not needed and should be disabled:
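```
plugins.vmq_passwd = off
plugins.vmq_acl = off
```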
You must set allow_anonymous = off, otherwise VerneMQ won't use the database plugin for authentication and authorization.
In order to use a database for authentication and authorization, the database must be properly configured and the auth data (username, client id, password, ACLs) must be present. The following sections show some sample requests that can be used to insert such data.
While the handling of authentication differs among the different databases, the handling of ACLs is roughly identical and makes use of a JSON array containing one or many ACL objects per configured client.
The database integrations will cache the ACLs when the client connects avoiding expensive database lookups for each publish or subscribe message. The cache entries are evicted when the client disconnects.
A minimal publish & subscribe ACL JSON object takes the following form:
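A sketch of such a minimal object (the topic is illustrative):

```
{
    "pattern": "a/+/c"
}
```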
General ACL
The pattern is an MQTT topic string that can contain MQTT wildcards, but also the template variables %m (mountpoint), %u (username), and %c (client id), which are automatically substituted with the auth data provided.
Publish ACL
The publish ACL makes it possible to control the maximum QoS and payload size that is allowed, and if the message is allowed to be retained.
Moreover, the publish ACL makes it possible to modify the properties of a published message by specifying one or multiple modifiers. Please note that the modified message isn't re-validated by the ACL.
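A sketch of a publish ACL object; the field names used here (max_qos, max_payload_size, allowed_retain) are assumptions used for illustration:

```
{
    "pattern": "a/+/c",
    "max_qos": 1,
    "max_payload_size": 128,
    "allowed_retain": true
}
```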
Subscribe ACL
The subscribe ACL makes it possible to control the maximum QoS a client is allowed to subscribe to.
Like the publish ACL, the subscribe ACL makes it possible to change the current subscription request by returning a custom set of topic/qos pairs. Please note that the modified subscription isn't re-validated by the ACL.
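A sketch of a subscribe ACL object (field names as assumed above):

```
{
    "pattern": "a/+/c",
    "max_qos": 2
}
```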
When deciding on which database to use one has to consider which kind of password hashing and key derivation functions are available and required. Different databases provide different mechanisms; for example, PostgreSQL provides the pgcrypto module, which supports verifying hashed and salted passwords, while Redis has no such features. VerneMQ therefore also provides client-side password verification mechanisms such as bcrypt.
There is a trade-off between verifying passwords on the client-side versus on the server-side. Verifying passwords client-side of course means doing the computations on the VerneMQ broker, and this takes away resources from other tasks such as routing messages. With hashing functions such as bcrypt, which are designed specifically to be slow (proportional to the number of rounds) in order to make brute-force attacks infeasible, this can become a problem. For example, if verifying a password with bcrypt takes 0.5 seconds, then on a single-threaded core 2 verifications/second are possible, and using 4 single-threaded cores, 8 verifications/second. So, the number of rounds/security parameters has a direct impact on the maximum number of verifications/second and hence also on the maximum arrival rate of new clients per second.
For each database it is specified which password verification mechanisms are available and if they are client-side or server-side.
Note that currently bcrypt version `2a` (prefix `$2a$`) is supported.
To enable PostgreSQL authentication and authorization, the following needs to be configured in the vernemq.conf file:
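A sketch of the relevant settings (hosts, credentials and database name are placeholders; check vernemq.conf for the exact keys):

```
vmq_diversity.auth_postgres.enabled = on
vmq_diversity.postgres.host = 127.0.0.1
vmq_diversity.postgres.port = 5432
vmq_diversity.postgres.user = vmq_test_user
vmq_diversity.postgres.password = vmq_test_password
vmq_diversity.postgres.database = vmq_test_database
vmq_diversity.postgres.password_hash_method = crypt
```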
In case your PostgreSQL database requires SSL, you'll have to tell the plugin:
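```
vmq_diversity.postgres.ssl = on
```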
Consult the vernemq.conf file for more info about additional options.
PostgreSQL hashing methods:
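| method | client-side | server-side |
|---|---|---|
| bcrypt | ✓ | |
| crypt | | ✓ |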
The following SQL DDL must be applied; the pgcrypto extension is required if using the server-side crypt hashing method:
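A sketch of the expected table layout (table and column names as assumed here):

```
CREATE EXTENSION pgcrypto;

CREATE TABLE vmq_auth_acl (
    mountpoint character varying(10) NOT NULL,
    client_id character varying(128) NOT NULL,
    username character varying(128) NOT NULL,
    password character varying(128),
    publish_acl json,
    subscribe_acl json,
    CONSTRAINT vmq_auth_acl_primary_key PRIMARY KEY (mountpoint, client_id, username)
);
```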
To enter new ACL entries use a query similar to the following:
To enable CockroachDB authentication and authorization, the following needs to be configured in the vernemq.conf file:
Notice that if the CockroachDB installation is secure, then TLS is required. If using an insecure installation without TLS, then vmq_diversity.cockroachdb.ssl can be set to off.
CockroachDB hashing methods:
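| method | client-side | server-side |
|---|---|---|
| bcrypt | ✓ | |
| sha256 | | ✓ |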
The following SQL DDL must be applied:
To enter new ACL entries use a query similar to the following; the example is for the bcrypt hashing method:
For MySQL authentication and authorization, configure the following in vernemq.conf:
MySQL hashing methods:
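| method | client-side | server-side |
|---|---|---|
| sha256 | | ✓ |
| md5* | | ✓ |
| sha1* | | ✓ |
| password | | ✓ |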
It should be noted that all the above options store unsalted passwords, which are vulnerable to rainbow table attacks, so the threat model should be considered carefully when using them. Also note that the methods marked with * are no longer considered secure hashes.
The following SQL DDL must be applied:
To enter new ACL entries use a query similar to the following; the example uses PASSWORD() for password hashing:
Note, the PASSWORD() hashing method needs to be changed according to the configuration set in vmq_diversity.mysql.password_hash_method, which supports the options password, md5, sha1 and sha256. Learn more about the MySQL equivalents for those methods on https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html.
The default password method has been deprecated since MySQL 5.7.6 and is not usable with MySQL 8.0.11+. Also, the MySQL authentication method caching_sha2_password is not supported. This is the default in MySQL 8.0.4 and later, so you need to add default_authentication_plugin=mysql_native_password under [mysqld] in e.g. /etc/mysql/my.cnf.
For MongoDB authentication and authorization, configure the following in vernemq.conf:
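A sketch of the relevant settings (host and port are placeholders; check vernemq.conf for the exact keys):

```
vmq_diversity.auth_mongodb.enabled = on
vmq_diversity.mongodb.host = 127.0.0.1
vmq_diversity.mongodb.port = 27017
```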
VerneMQ supports MongoDB's DNS SRV record lookup to fetch a seed list. Specify the hostname of the hosted database as a srv option instead of host and port. VerneMQ will randomly choose a host/port combination from the seed list returned in the DNS SRV record. MongoDB SRV connections use TLS by default. You will need to configure TLS support for MongoDB for most SRV connections.
MongoDB supports a number of node types in replica sets. The built-in MongoDB support simply connects to the host and port specified. It does not differentiate between primary or secondary nodes in MongoDB replica sets.
MongoDB hashing methods:
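| method | client-side | server-side |
|---|---|---|
| bcrypt | ✓ | |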
Insert the ACL using the mongo shell or any software library. The passhash property contains the bcrypt hash of the client's password.
For Redis authentication and authorization, configure the following in vernemq.conf:
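A sketch of the relevant settings (host and port are placeholders; check vernemq.conf for the exact keys):

```
vmq_diversity.auth_redis.enabled = on
vmq_diversity.redis.host = 127.0.0.1
vmq_diversity.redis.port = 6379
```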
Redis hashing methods:
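| method | client-side | server-side |
|---|---|---|
| bcrypt | ✓ | |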
Insert the ACL using the redis-cli shell or any software library. The passhash property contains the bcrypt hash of the client's password. The key is an encoded JSON array containing the mountpoint, username, and client id. Note that no spaces are allowed between the array items.
VerneMQ can be installed on Debian or Ubuntu-based systems using the binary package we provide.
Once you have downloaded the binary package, execute the following command to install VerneMQ:
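A sketch (the package file name is a placeholder for the version you downloaded):

```
sudo dpkg -i vernemq-<VERSION>.bionic.x86_64.deb
```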
Note: Replace bionic with the appropriate OS version such as focal/trusty/xenial.
You can verify that VerneMQ is successfully installed by running:
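For example:

```
dpkg --status vernemq | grep Status
```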
If VerneMQ has been installed successfully, Status: install ok installed is returned.
To use the provided binary packages the VerneMQ EULA must be accepted. See Accepting the VerneMQ EULA for more information.
Once you've installed VerneMQ, start it on your node:
The whereis vernemq command will give you a couple of directories:
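| Path | Description |
|---|---|
| /usr/sbin/vernemq | the vernemq and vmq-admin commands |
| /usr/lib/vernemq | the vernemq package |
| /etc/vernemq | the vernemq.conf file |
| /usr/share/vernemq | the internal vernemq schema files |
| /var/lib/vernemq | the vernemq data dirs for LevelDB (Metadata Store and Message Store) |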
Now that you've installed VerneMQ, check out How to configure VerneMQ.
Configure VerneMQ Logging.
Where should VerneMQ emit the default console log messages (which are typically at info severity)?
VerneMQ defaults to logging the console messages to a file, which can be specified by:
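For example (a sketch, assuming the usual option name):

```
log.console.file = /var/log/vernemq/console.log
```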
This option defaults to /var/log/vernemq/console.log for Ubuntu, Debian, RHEL and Docker installs.
The default console logging level of info can be changed by setting one of the following:
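For example (a sketch, assuming the usual option name):

```
log.console.level = debug
```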
VerneMQ logs error messages by default. One can change the default behaviour by setting:
VerneMQ defaults to logging the error messages to a file, which can be specified by:
This option defaults to /var/log/vernemq/error.log for Ubuntu, Debian, RHEL and Docker installs.
VerneMQ logs crash messages by default. One can change the default behaviour by setting:
VerneMQ defaults to logging the crash messages to a file, which can be specified by:
This option defaults to /var/log/vernemq/crash.log for Ubuntu, Debian, RHEL and Docker installs.
The maximum size in bytes of individual messages in the crash log defaults to 64KB but can be specified by:
VerneMQ rotates crash logs. By default, the crash log file is rotated at midnight or when the size exceeds 10MB. This behaviour can be changed by setting:
The default number of rotated log files is 5 and can be set with the option:
VerneMQ supports logging to SysLog, enable it by setting:
Logging to SysLog is disabled by default.
Managing VerneMQ Plugins
Many aspects of VerneMQ can be extended using plugins. The standard VerneMQ package comes with several official plugins. You can show the enabled & running plugins via:
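```
vmq-admin plugin show
```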
The command above displays all the enabled plugins together with the hooks they implement:
The table will show the following information:
name of the plugin
type (application or single module)
all the hooks implemented in the plugin
the exact module and function names (M:F/A) implementing those hooks.
As an example of how to read the table: the vmq_passwd:auth_on_register/5 function is the actual implementation of the auth_on_register hook in the vmq_passwd application plugin.
In addition, you can conclude that the plugin is currently running, as it shows up in the table.
To display information on internal plugins, add the --internal flag. The table below shows you that the generic metadata application and the generic message store are actually internal plugins.
This enables the ACL plugin. Because the vmq_acl plugin is already started, the above command won't succeed. In case the plugin sits in an external directory, you must also provide the --path=PathToPlugin option.
To make a plugin start when VerneMQ boots, you need to tell VerneMQ in the main vernemq.conf file.
The general syntax to enable a plugin is to add a line like plugins.pluginname = on. Using the vmq_passwd plugin as an example:
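```
plugins.vmq_passwd = on
```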
If the plugin is external (all of your own VerneMQ plugins will be in this category), the path can be specified like this:
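A sketch (myplugin and the path are placeholders for your own plugin):

```
plugins.myplugin = on
plugins.myplugin.path = /path/to/your/plugin
```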
Plugin-specific settings can be configured via myplugin.somesetting = value, like:
Check the vernemq.conf
file for additional details and examples.
Configure WebSocket Listeners for VerneMQ.
VerneMQ supports the WebSocket protocol out of the box. To be able to open a WebSocket connection to VerneMQ, you have to configure a WebSocket listener or Secure WebSocket listener in the vernemq.conf file first:
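For example (address and port are placeholders):

```
listener.ws.default = 127.0.0.1:9001
```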
Keep in mind that you'll use MQTT over WebSocket, so you will need a JavaScript library that implements the MQTT client behaviour.
You won't be able to open WebSocket connections on a base URL; always add the /mqtt path.
Working with shared subscriptions
A shared subscription is a mechanism for distributing messages to a set of subscribers of a shared subscription topic, such that each message is received by only one subscriber. This contrasts with normal subscriptions where each subscriber receives a copy of the published message.
A shared subscription is of the form $share/sharename/topic, and subscribers to this topic will receive messages published to the topic topic. The messages will be distributed according to the defined distribution policy.
The MQTT spec only defines shared subscriptions for protocol version 5. VerneMQ supports shared subscription for v5 (as per the specification) and for v3.1.1 (backported feature).
When subscribing to a shared subscription using command line tools, remember to quote the topic, as some command line shells, like bash, will otherwise expand the $share part of the topic as an environment variable.
Currently four message distribution policies for shared subscriptions are supported: prefer_local, random, local_only and prefer_online_before_local. Under the random policy messages will be published to a random member of the shared subscription, if any exist. Under the prefer_local policy messages will be delivered to a random node-local member of the shared subscription; if none exists, the message will be delivered to a random member of the shared subscription on a remote cluster node. The prefer_online_before_local policy works similarly to prefer_local, but will look for an online subscriber on a non-local node if there are only offline subscribers on the local one. Under the local_only policy messages will be delivered to a random node-local member of the shared subscription.
When a message is being delivered to subscribers of a shared subscription, the message will be delivered to an online subscriber if possible; otherwise the message will be delivered to an offline subscriber.
Note that Shared Subscriptions still fully operate under the MQTT specification (be it MQTT 5.0 or backported to older protocol versions). Be aware of this, especially regarding QoS and clean_session configurations. This also means that there is no shared offline message queue for all clients, but each client has its own offline message queue. MQTT v5 shared subscriptions thus have a different behaviour than e.g. Kafka where consumers read from a single shared message queue.
Subscriptions Note: When subscribing to a shared topic, make sure to escape the $. So, for dash or bash shells:
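A sketch using the mosquitto command line client (the broker address, group and topic are placeholders); the single quotes prevent the shell from expanding $share:

```
mosquitto_sub -h localhost -t '$share/mygroup/mytopic'
```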
Publishing Note: When publishing to a shared topic, do not include the prefix $share/group/ as part of the publish topic name.
Configure a couple of hidden options for VerneMQ
There are a couple of hidden options you can set in the vernemq.conf
file. Hidden means that you have to add and set the value explicitly. Hidden options still have default values. Changing them should be considered advanced, possibly with the exception of setting a max_message_rate
.
Specify how the queue should deliver messages when multiple sessions are allowed. In case of fanout all the attached sessions will receive the message; in case of balance an attached session is chosen randomly.
The feature to enable multiple sessions will be deprecated in VerneMQ 2.0.
Specify how queues should process messages, either the fifo or lifo way, with a default setting of fifo. The setting will apply globally, that is, for every spawned queue in a VerneMQ broker. (You can override the queue_type setting in plugins in the auth_on_register hook.)
Specifies the maximum incoming publish rate per session per second. Depending on the underlying network buffers this rate isn't strictly enforced. Defaults to 0, which means no rate limits apply. Setting it to a value of 2 limits any publisher to 2 messages per second, for instance.
Due to the eventually consistent nature of the subscriber store it is possible that during queue migration messages still arrive on the old cluster node. This parameter enables compensation for that fact by keeping the queue around for some configured time (in seconds) after it was migrated to the other cluster node.
Specifies the number of messages that are delivered to the remote node per drain step. A large value will provide a faster migration of a queue, but increases the waste of bandwidth in case the migration fails.
Allows selecting a new default reg_view. A reg_view is a pre-defined way to route messages. Multiple views can be loaded and used, but one has to be selected as the default. The default routing is vmq_reg_trie, i.e. routing via the built-in trie data structure.
A list of views that are started during startup. It's only used in plugins that want to choose dynamically between routing reg_views.
An integer specifying how many bytes are buffered in case the remote node is not available. Defaults to 10000.
Defines the maximum lifetime of an MQTT connection in seconds. max_connection_lifetime can be set per listener. This is an implementation of the MQTT security proposal: "Servers may close the Network Connection of Clients and require them to re-authenticate with new credentials." It is possible to override the value in auth_on_register(_m5) to a lower limit.
MQTT consumers can share and loadbalance a topic subscription.
Consumer session balancing has been deprecated and will be removed in VerneMQ 2.0. Use shared subscriptions instead.
Sometimes consumers get overwhelmed by the number of messages they receive. VerneMQ can load balance between multiple consumer instances subscribed to the same topic with the same ClientId.
To enable session balancing, activate the following two settings in vernemq.conf:
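A sketch of the two settings; the option names used here are assumptions for illustration:

```
allow_multiple_sessions = on
queue_deliver_mode = balance
```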
Currently those settings will activate consumer session balancing globally on the respective node. Restricting balancing to specific consumers only will require a plugin. Note that you cannot balance consumers spread over different cluster nodes.
VerneMQ uses Google's LevelDB as a fast storage backend for messages and subscriber information. Each VerneMQ node runs its own embedded LevelDB store.
There's not much you need to know about LevelDB and VerneMQ. One really important thing to note is that LevelDB manages its own memory. This means that VerneMQ will not allocate and free memory for LevelDB. Instead, you'll have to tell LevelDB how much memory it can use by setting leveldb.maximum_memory.percent.
Configuring LevelDB memory:
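For example (the value is only illustrative):

```
leveldb.maximum_memory.percent = 20
```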
LevelDB means business with its allocated memory. It will eventually use the configured maximum, making it look like there's a memory leak, or even triggering OOM kills. Keep that in mind when configuring the percentage of RAM you give to LevelDB. Historically, the configured default was 70% of RAM, which is too high for a lot of use cases and can be safely lowered.
(e)LevelDB exposes a couple of additional configuration values that we link here for the sake of completeness. You can change all the values mentioned in the eleveldb schema file. VerneMQ mostly uses the configured defaults, and for most use cases it should not be necessary to change those.
Everything you must know to properly configure and deploy a VerneMQ Cluster
VerneMQ uses the Erlang distribution mechanism for most inter-node communication. VerneMQ identifies other machines in the cluster using Erlang identifiers (e.g. VerneMQ@10.9.8.7). Erlang resolves these node identifiers to a TCP port on a given machine via the Erlang Port Mapper Daemon (epmd) running on each cluster node.
By default, epmd binds to TCP port 4369 and listens on the wildcard interface. For inter-node communication, Erlang uses an unpredictable port by default; it binds to port 0, which means the first available port.
For ease of firewall configuration, VerneMQ can be configured to instruct the Erlang interpreter to use a limited range of ports. For example, to restrict the range of ports that Erlang will use for inter-Erlang node communication to 6000-7999, add the following lines to vernemq.conf on each VerneMQ node:
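A sketch of the two settings (assuming the usual option names):

```
erlang.distribution.port_range.minimum = 6000
erlang.distribution.port_range.maximum = 7999
```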
The settings above are only used for distributing subscription updates and maintenance messages. For distributing the 'real' MQTT messages, the proper vmq listener must be configured in the vernemq.conf.
It isn't necessary to configure the same port on every machine, as the nodes will probe each other for this information.
Attributions:
This section, "VerneMQ Inter-node Communication", is a derivative of Security and Firewalls by Riak, used under Creative Commons Attribution 3.0 Unported License.
On every VerneMQ node you'll find the vmq-admin command line tool in the release's bin directory (in case you use the binary VerneMQ packages, vmq-admin should already be callable in your path, without changing directories). It has different sub-commands that let you check for status, start and stop listeners, re-configure values and perform a couple of other administrative tasks.
vmq-admin has different sub-commands with a lot of respective options. You can familiarize yourself by using the --help option on the different levels of vmq-admin. You might see additional sub-commands in case integrated plugins are running (vmq-admin bridge is an example).
vmq-admin works by RPC'ing into the local VerneMQ node by default. For most commands you can add a --node option and set values on other cluster nodes, even if the local VerneMQ node is down.
To check for the global cluster state in case the local VerneMQ node is down, you'll have to go to another node though.
vmq-admin is a live re-configuration utility. Please note that all dynamically configured values will be reset by vernemq.conf upon broker restart.

As a consequence, it's good practice to keep track of the applied changes when re-configuring a broker with vmq-admin. If needed, you can then persist changes by adding them to the vernemq.conf file.
Everything you must know to properly configure and deploy a VerneMQ Cluster
VerneMQ can be easily clustered. Clients can then connect to any cluster node and receive messages from any other cluster nodes. However, the MQTT specification gives certain guarantees that are hard to fulfill in a distributed environment, especially when network partitions occur. We'll discuss the way VerneMQ deals with network partitions in its own subsection
Set the Cookie! All cluster nodes need to be configured to use the same Cookie value. It can be set in the vernemq.conf with the distributed_cookie setting. Set the Cookie to a private value for security reasons!
For a successful VerneMQ cluster setup, it is important to choose proper VerneMQ node names. In vernemq.conf, change nodename = VerneMQ@127.0.0.1 to something appropriate. Make sure that the node names are unique within the cluster. Read the section on VerneMQ Inter-node Communication if firewalls are involved.
Before you go ahead and experience the full power of clustering VerneMQ, be aware of its stateful character. An MQTT broker is a stateful application and a VerneMQ cluster is a stateful cluster.
What does this mean in detail? It means that clustered VerneMQ nodes will share information about connected clients and sessions but also meta-information about the cluster itself.
For instance, if you stop a cluster node, the VerneMQ cluster will not just forget about it. It will know that there's a node missing and it will keep looking for it. It will know there's a netsplit situation and it will heal the partition when the node comes back up. But if the missing node never comes back there's an eternal netsplit. (still resolvable by making the missing node explicitly leave).
This doesn't mean that a VerneMQ cluster cannot dynamically grow and shrink. But it means you have to tell the cluster what you intend to do, by using join and leave commands.
If you want a cluster node to leave the cluster, well... use the vmq-admin cluster leave command. If you want a node to join a cluster, use the vmq-admin cluster join command.
Makes sense? Go ahead and create your first VerneMQ cluster!
The discovery node can be any other node in the cluster. It is not necessary to always choose the same node as the discovery node. It is important that only a node with an empty history joins a cluster; one should not try to add a node that has already had traffic on it to a cluster.
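A sketch of the join command (the discovery node name is a placeholder):

```
vmq-admin cluster join discovery-node=VerneMQ@192.168.1.10
```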
A cluster leave will actually do a lot more work and gives you some options to choose from. The node leaving the cluster will go to great lengths trying to migrate its existing queues to other nodes. As queues (online or offline) are live processes in a VerneMQ node, it will only exit after it has migrated them.
Let's look at the steps in detail:
vmq-admin cluster leave node=<NodeThatShouldGo>
This first step will only stop the MQTT Listeners of the node to ensure that no new connections are accepted. It will not interrupt the existing connections, and behind the scenes the node will not leave the cluster yet. Existing clients are still able to publish and receive messages at this point.
The idea is to give a grace period with the hope that existing clients might re-connect (to another node). If you have decided that this period is over (after 5 minutes or 1 day is up to you), you proceed with step 2: disconnecting the rest of the clients.
vmq-admin cluster leave node=<NodeThatShouldGo> -k
The -k flag will delete the MQTT listeners of the leaving node, taking down all live connections. If this is what you want from the beginning, you can do this right away as a first step.
Now, queue migration is triggered by clients re-connecting to other nodes. They will claim their queue and it will get migrated. Still, there might be some offline queues remaining on the leaving node, because they were pre-existing or because some clients do not re-connect and do not reclaim their queues.
VerneMQ will throw an exception if there are remaining offline queues after a configurable timeout. The default is 60 seconds, but you can set it as an option to the cluster leave command. As soon as the exception shows in the console or console.log, you can retry the cluster leave command, including setting a migration timeout (-t) and an interval in seconds (-i) indicating how often information on the migration progress should be printed to the console.log:
vmq-admin cluster leave node=<NodeThatShouldGo> -k -i 5 -t 120
After this timeout VerneMQ will forcefully migrate the remaining offline queues to other cluster nodes in a round robin manner. After doing that, it will stop the leaving VerneMQ node.
Note 1: While doing a cluster leave, it's a good idea to tail -f the VerneMQ console.log to see queue migration progress.
Note 2: A node that has left the cluster is considered dead. If you want to reuse that node as a single node broker, you have to (backup & rename &) delete the whole VerneMQ data directory and start with a new directory (it will be created automatically by VerneMQ at boot). Otherwise that node will start looking for its old cluster peers when you restart it.
So, case A was the happy case. You left the cluster with your node in a controlled manner, and everything worked, including a complete queue (and message) transfer to other nodes.
Let's look at the second possibility where the node is already down. Your cluster is still counting on it though, and possibly blocking new subscriptions for that reason, so you want to make the node leave.
To do this, use the same command(s) as in the first case. There is one important consequence to note: by making a stopped node leave, you basically throw away persistent queue content, as VerneMQ won't be able to migrate or deliver it.
Let's repeat that to make sure:
Case B: Currently the persisted QoS 1 & QoS 2 messages aren't replicated to the other nodes by the default message store backend. Currently you will lose the offline messages stored on the leaving node.
How VerneMQ deals with network partitions, aka netsplits.
This section elaborates how a VerneMQ cluster deals with network partitions (aka. netsplit or split brain situation). A netsplit is mostly the result of a failure of one or more network devices resulting in a cluster where nodes can no longer reach each other.
VerneMQ is able to detect a network partition, and by default it will stop serving CONNECT, PUBLISH, SUBSCRIBE, and UNSUBSCRIBE requests. A properly implemented client will always resend unacked commands, and messages are therefore not lost (QoS 0 publishes will be lost). However, in the time window between the network partition occurring and VerneMQ detecting it, much can happen. Moreover, this time frame will be different on every participating cluster node. In this guide we refer to this time frame as the Window of Uncertainty.
The behaviour during a netsplit is completely configurable via allow_register_during_netsplit, allow_publish_during_netsplit, allow_subscribe_during_netsplit, and allow_unsubscribe_during_netsplit. These options supersede the trade_consistency option. In order to reach the same behaviour as trade_consistency = on, all the mentioned netsplit options have to be set to on.
VerneMQ follows an eventually consistent model for storing and replicating the subscription data. This also includes retained messages.
Due to the eventually consistent data model it is possible that during the Window of Uncertainty a publish won't take into account a subscription made on a remote node (in another partition). Obviously, VerneMQ can't deliver the message in this case. The same holds for delivering retained messages to remote subscribers.
Last will messages that are triggered during the Window of Uncertainty will be delivered to the reachable subscribers. During a netsplit, but after the Window of Uncertainty, last will messages will currently be lost.
Normally, client registration is synchronized using an elected leader node for the given client id. Such a synchronization removes the race condition between multiple clients trying to connect with the same client id on different nodes. However, during the Window of Uncertainty it is currently possible that VerneMQ fails to disconnect a client connected to a different node. Although this scenario sounds artificially crafted, it is possible to end up with duplicate clients connected to the cluster.
As soon as the partition is healed, and connectivity reestablished, the VerneMQ nodes replicate the latest changes made to the subscription data. This includes all the changes 'accidentally' made during the Window of Uncertainty. Using Dotted Version Vectors VerneMQ ensures that convergence regarding subscription data and retained messages is eventually reached.
Inspecting the retained message store
To list the retained messages simply invoke vmq-admin retain show:
Note, by default a maximum of 100 results are returned. This is a mechanism to protect the broker from overload, as there can be millions of retained messages. Use --limit=<RowLimit> to override the default value.
Besides listing the retained messages it is also possible to filter them:
In the above example we list only the payload for the topic some/topic.
Another example lists all topics with retained messages that have a specific payload:
See the full set of options and documentation by invoking vmq-admin retain show --help:
Managing VerneMQ live config values.
You can dynamically re-configure most of VerneMQ's settings on a running node by using the vmq-admin set command.
The following config values can be handled dynamically:
Settings dynamically configured with the vmq-admin set command will be reset by vernemq.conf upon broker restart.
Let's change the max_client_id_size as an example. (We might have noticed that some clients can't log in because their client ID is too long, but we don't want to restart the broker for that.) Note that you can also set multiple values with the same command:
You can show one or multiple values in a simple table:
Managing VerneMQ tcp listeners
You can configure as many listeners as you wish in the vernemq.conf file. In addition to this, the vmq-admin listener command lets you configure, start, stop and delete listeners on the fly. Those can be MQTT, WebSocket or Cluster listeners; in the command line output they will be tagged mqtt, ws or vmq accordingly.
To get info on a listener sub-command, invoke it with the --help option. Example: vmq-admin listener start --help
Listeners configured with the vmq-admin listener command will not survive a broker restart. Live changes to listeners configured in vernemq.conf are possible, but the vernemq.conf listeners will just be restarted with a broker restart.
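A sketch of such a start command, matching the example discussed next (argument names are assumptions; see vmq-admin listener start --help for the exact syntax):

```
vmq-admin listener start address=192.168.1.50 port=1884
```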
This will start an MQTT listener on port 1884 and IP address 192.168.1.50. If you want to start a WebSocket listener, just tell VerneMQ by adding the --websocket flag. There are more options, mainly for configuring SSL (use vmq-admin listener start --help).
You can isolate client connections accepted by a certain listener from other clients by setting a mountpoint.
To start an MQTT listener using defaults, just set the port and IP address as a minimum.
You can add the -k or --kill_sessions switch to that command. This will disconnect all client connections set up by that listener. In combination with a mountpoint, this can be useful for terminating clients of a specific application, or to force re-connects to another cluster node (to prepare for a cluster leave for your node).
Inspecting and managing MQTT sessions
VerneMQ comes with powerful tools for inspecting the state of MQTT sessions. To list current MQTT sessions simply invoke vmq-admin session show:
To see detailed information about the command, see vmq-admin session show --help.
The command is able to show a lot of different information about a client, for example the client id, the peer host and port, whether the client is online or offline, and much more; see vmq-admin session show --help for details. Furthermore, the information can be used for filtering, which is very helpful when you want to narrow the output down to a single client.
A sample query which lists only the node where the client session exists and if the client is online would look like the following:
Note, by default a maximum of 100 rows are returned from each node in the cluster. This is a mechanism to protect the cluster from overload, as there can be millions of MQTT sessions and resulting rows. Use --limit=<RowLimit> to override the default value.
To list the clients and their subscriptions, one can do the following:
And to list only the clients subscribed to the topic some/topic:
To figure out when the queue for a persisted session (clean_session=false) was created and when the client last connected, one can use the --queue_started_at and --session_started_at options to list the POSIX timestamps (in microseconds):
Besides the examples above it is also possible to inspect the number of online or offline messages as well as their payloads and much more. See vmq-admin session show --help for an exhaustive list of all the available options.
VerneMQ also supports disconnecting clients and reauthorizing client subscriptions. To disconnect a client, clean up its stored messages and remove its subscriptions, one can invoke:
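For example, for a hypothetical client-id myclient (these flags also appear in the HTTP API section below):

```
vmq-admin session disconnect client-id=myclient --cleanup
```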
See vmq-admin session disconnect --help
for more options and details.
To reauthorize subscriptions for a client issue the following command:
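A sketch, assuming the hypothetical username myuser and client-id myclient:

```
vmq-admin session reauthorize username=myuser client-id=myclient
```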
This works by reapplying the logic in any installed auth_on_subscribe
or auth_on_subscribe_m5
plugin hooks to check the validity of the existing topics and removing those that are no longer allowed. In the example above the reauthorization of the client subscriptions resulted in no changes.
VerneMQ can interface with other brokers (and itself) via MQTT bridges.
Bridges are a non-standard way (but de-facto standard) among MQTT broker implementations to connect two different MQTT brokers. Over a bridge, the topic tree of a remote broker becomes part of the topic tree on the local broker. VerneMQ bridges support plain TCP connections as well as SSL connections.
A bridge is a point-to-point connection between two brokers, but it can still forward all the messages from all cluster nodes to another cluster.
The VerneMQ bridge plugin currently forwards messages using MQTT protocol version 3.1.1. MQTT v5 messages will still be forwarded but be aware that metadata like user-defined properties will be dropped.
The MQTT bridge plugin (vmq_bridge
) is distributed with VerneMQ as an integrated plugin but is not enabled by default. After configuring the bridge as described below, make sure to enable the plugin by setting (vernemq.conf
):
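A minimal sketch of enabling the plugin in vernemq.conf:

```
plugins.vmq_bridge = on
```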
See the plugins documentation for more information on working with plugins.
Basic information on the configured bridges can be displayed on the admin CLI:
The vmq-admin bridge
command is only available when the bridge plugin is running.
To configure vmq_bridge
you need to edit the bridge section of the vernemq.conf
file to set endpoints and mapping topics. A bridge can push or pull messages, as defined in the topic pattern list.
Setup a bridge to a remote broker:
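A sketch, assuming a hypothetical remote broker at 192.168.1.100 (check the bridge section of your vernemq.conf for the exact key names):

```
vmq_bridge.tcp.br0 = 192.168.1.100:1883
```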
Different connection parameters can be set:
Define the topics the bridge should incorporate in its local topic tree (by subscribing to the remote), or the topics it should export to the remote broker (by publishing to the remote). We share a similar configuration syntax to that used by the Mosquitto broker:
topic
defines a topic pattern that is shared between the two brokers. Any topics matching the pattern (which may include wildcards) are shared. The second parameter defines the direction that the messages will be shared in, so it is possible to import messages from a remote broker using in, export messages to a remote broker using out, or share messages in both directions. If this parameter is not defined, VerneMQ defaults to out. The QoS level defines the publish/subscribe QoS level used for this topic and defaults to 0. (Source: mosquitto.conf)
The local-prefix
and remote-prefix
can be used to prefix incoming or outgoing publish messages.
Currently the #
wildcard is treated as a comment from the configuration parser, please use *
instead.
A simple example:
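A hedged sketch of a bridge named br0 that exports one topic and imports another (key names and argument order should be verified against your vernemq.conf template; topics and address are placeholders):

```
vmq_bridge.tcp.br0 = 192.168.1.100:1883
vmq_bridge.tcp.br0.topic.1 = sensors/* out 1
vmq_bridge.tcp.br0.topic.2 = commands/* in 1
```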
SSL bridges support the same configuration parameters as TCP bridges, but need further instructions for handling the SSL specifics:
MQTT Bridges that are initiated from the source broker (push bridges) are started when VerneMQ boots and finds a bridge configuration in the vernemq.conf
file. Sometimes it's useful to restart MQTT bridges without restarting a broker. This can be done by disabling, then enabling the vmq_bridge
plugin and manually calling the bridge start
command:
Everything you need to know to work with the VerneMQ HTTP administration interface
The VerneMQ HTTP API is enabled by default and installs an HTTP handler on http://localhost:8888/api/v1
. To read more about configuring the HTTP listener, see HTTP Listener Configuration. You can configure an HTTP listener or an HTTPS listener to serve the HTTP API v1.
The VerneMQ HTTP API uses basic authentication where an API key is passed as the username and the password is left empty; alternatively, the x-api-key header can be used. API keys have a scope and can optionally have an expiry date. So the first step to use the HTTP API is to create an API key.
Each HTTP module can be protected by an API key. An API key can be limited to a certain HTTP module or further restrict some functionality within that module. The scope used by the management API is "mgmt". Currently, the following scopes are supported: "status", "mgmt", "metrics", "health".
or with scope and an expiry date (in local time)
The keys are persisted and available on all cluster nodes.
To list existing keys do:
To add an API key of your own choosing, do:
To delete an API key do:
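A sketch of the corresponding vmq-admin api-key commands (mykey is a placeholder; the sub-command names should be checked with vmq-admin api-key --help):

```
vmq-admin api-key create
vmq-admin api-key show
vmq-admin api-key add key=mykey
vmq-admin api-key delete key=mykey
```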
You can specify the minimal length of an API key (default: 0) in vernemq.conf
or set a maximum duration of an API key before it expires (default: undefined)
Please note that changing those settings after some API keys have already been created has no influence on existing keys.
You can enable or disable API key authentication per module, or per module per listener.
Possible modules are vmq_metrics_http,vmq_http_mgmt_api, vmq_status_http, vmq_health_http. Possible values for auth.mode are noauth or apikey.
The API is using basic auth where the API key is passed as the username. An example using curl
would look like this:
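For example, with a hypothetical API key passed as the basic auth username and an empty password:

```
curl "http://mykey:@localhost:8888/api/v1/session/show"
```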
The mapping between vmq-admin
and the HTTP API is straightforward, and if one is already familiar with how the vmq-admin
tool works, working with the API should be easy. The mapping works such that the command part of a vmq-admin
invocation is turned into a path, and the options and flags are turned into the query string.
A mandatory parameter like the client-id
in the vmq-admin session disconnect client-id=myclient
command should be translated as: ?client-id=myclient
.
An optional flag like --cleanup
in the vmq-admin session disconnect client-id=myclient --cleanup
command should be translated as: &--cleanup
Let's look at the cluster join command as an example, which looks like this:
This turns into a GET request:
To test, run it with curl
:
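A sketch, assuming a hypothetical discovery node vmq2@192.168.1.11 and API key mykey:

```
GET /api/v1/cluster/join?discovery-node=vmq2@192.168.1.11

curl "http://mykey:@localhost:8888/api/v1/cluster/join?discovery-node=vmq2@192.168.1.11"
```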
And the returned response would look like:
Below are some other examples.
Request:
Curl:
Response:
Request:
Curl:
Response:
Request:
Curl:
Response:
Request:
Curl:
Response:
Request:
Curl:
Response:
Request:
Curl:
Response:
The VerneMQ HTTP API is a wrapper over the CLI tool, and anything that can be done using vmq-admin
can be done using the HTTP API. Note that the HTTP API is therefore subject to any changes made to the vmq-admin
tools and their flags & options structure. All requests are performed using an HTTP GET and, if no errors occur, an HTTP 200 OK code is returned with a possibly non-empty JSON payload.
Description and Configuration of the $SYS Tree Monitoring Feature
The systree functionality is enabled by default and reports the broker metrics at a fixed interval defined in the vernemq.conf
. The metrics defined here are transformed to MQTT topics e.g. mqtt_publish_received
is transformed to $SYS/<nodename>/mqtt/publish/received
. <nodename>
is your node's name, as configured in the vernemq.conf
. To find it, you can grep the file for it: grep nodename vernemq.conf
The complete list of metrics can be found here.
This option defaults to 20000
milliseconds.
If the systree feature is not required it can be disabled in vernemq.conf
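A sketch of the relevant vernemq.conf settings (key names assumed from the default configuration file):

```
systree_enabled = off
systree_interval = 20000
```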
The feature and the interval can be changed at runtime using the vmq-admin
script.
Usage: vmq-admin set <Setting>=<Value> ... [[--node | -n] <Node> | --all]
Example: vmq-admin set systree_interval=60000 -n VerneMQ@127.0.0.1
Examples:
Real-time inspection
When working with a system like VerneMQ, it is sometimes helpful during troubleshooting to know what a client is actually sending and receiving, and what VerneMQ is doing with this information. For this purpose VerneMQ has a built-in tracing mechanism that is safe to use in production settings: there is very little overhead in running the tracer, and it has built-in protection mechanisms to stop traces that produce too much information.
To trace a client the following command is available:
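For example, for a hypothetical client-id myclient:

```
vmq-admin trace client client-id=myclient
```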
See the available flags by calling vmq-admin trace client --help
.
A typical trace could look like the following:
In this particular example a trace was started for the client with client-id client
. At first no clients are connected to the node where the trace has been started, but a little later the client connects and we see the trace come alive. The strange identifier <7616.3443.1>
is called a process identifier and is the identifier of the process in which the trace happened - this isn't relevant unless one wants to correlate the trace with log entries where process identifiers are also logged. Besides the process identifier there are some lines with MQTT SEND
and MQTT RECV
which are to be understood from the perspective of the broker. In the above trace this means that first the broker receives a CONNECT
frame and replies with a CONNACK
frame. Each MQTT event is annotated with the data from the MQTT frame to give as much detail and insight as possible.
Notice the auth_on_register
call between CONNECT
and CONNACK
which is the authentication plugin hook being called to authenticate the client. In this case the hook returned ok
which means the client was successfully authenticated.
Likewise notice the auth_on_subscribe
call between the SUBSCRIBE
and SUBACK
frames, which is the plugin hook used to authorize whether this particular subscription should be allowed or not. In this case the subscription was authorized.
The client trace command has additional options as shown by vmq-admin trace client --help
. Those are hopefully self-explanatory:
A convenient tool is the ts
(timestamp) tool which is available on many systems. If the trace output is piped to this command each line is prefixed with a timestamp:
ts | sudo vmq-admin trace client client-id=tester
It is currently not possible to start multiple traces from multiple shells, or trace multiple ClientIDs.
If you lose access to the shell from which you started a trace, you might need to stop that trace before you can spawn a new one. Your attempt to spawn a second trace will result in the following output:
You can stop a running trace using the stop_all
command from a second shell. This will log a message to the other shell, telling that session that it is being externally terminated. The calling shell will silently return and be available for a new trace.
Description and Configuration of the Graphite exporter
The graphite exporter reports the broker metrics at a fixed interval (defined in milliseconds) to a graphite server. The necessary configuration is done inside the vernemq.conf
.
You can further tune the connection to the Graphite server:
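A hedged sketch of a typical Graphite configuration in vernemq.conf, including the basic settings and some tuning knobs (key names assumed from the default configuration file; host and prefix are placeholders):

```
graphite_enabled = on
graphite_host = carbon.example.com
graphite_port = 2003
graphite_interval = 20000
graphite_prefix = vernemq
```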
The above configuration parameters can be changed at runtime using the vmq-admin
script.
Usage: vmq-admin set <Setting>=<Value> ... [[--node | -n] <Node> | --all]
Example: vmq-admin set graphite_interval=20000 graphite_port=2003 -n VerneMQ@127.0.0.1
Description and Configuration of the built-in Monitoring mechanism
VerneMQ can be monitored in several ways. We implemented native support for Graphite, MQTT $SYS tree, and Prometheus.
The metrics are also available via the command line tool:
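For example:

```
vmq-admin metrics show
```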
Or with:
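A sketch (the -d flag is an assumption; check vmq-admin metrics show --help for the exact descriptions flag):

```
vmq-admin metrics show -d
```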
This will output the metrics together with a short description of what each metric is about. An example looks like:
Notice that the metrics:
Are no longer used (always 0) and will be removed in the future. They were replaced with mqtt_connack_sent
using the return_code
label. For MQTT 5.0 the reason_code
label is used instead.
The output on the command line is aggregated by default, but details for a label can be shown as well, for example all metrics with the not_authorized
label:
All available labels can be shown using vmq-admin metrics show --help
.
Description and Configuration of the Prometheus exporter
The Prometheus exporter is enabled by default and installs an HTTP handler on http://localhost:8888/metrics
. To read more about configuring the HTTP listener, see HTTP Listener Configuration.
Add the following configuration to the scrape_configs
section inside prometheus.yml
of your Prometheus server.
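A sketch of such a scrape configuration (the job name and target are placeholders):

```yaml
scrape_configs:
  - job_name: vernemq
    scrape_interval: 5s
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:8888']
```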
This tells Prometheus to scrape the VerneMQ metrics endpoint every 5 seconds.
Please follow the documentation on the Prometheus website to properly configure the metrics scraping as well as how to access those metrics and configure alarms and graphs.
Netdata Metrics
A great way to monitor VerneMQ is to use Netdata or Netdata Cloud. Netdata uses VerneMQ in its Netdata Cloud service, and has developed full integration with VerneMQ.
This means that you have one of the best monitoring tools ready for VerneMQ. Netdata will show you all the VerneMQ metrics in a realtime dashboard.
When Netdata runs on the same node as VerneMQ it will automatically discover the VerneMQ node.
Learn how to setup Netdata for VerneMQ with the following guide:
https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/vernemq
The VerneMQ health checker
A simple way to gauge the health of a VerneMQ cluster is to query the /health
path on the HTTP listener.
The health check will return 200 when VerneMQ is accepting connections and is joined with the cluster (for clustered setups). 503 will be returned in case any of those two conditions are not met.
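For example, against the default HTTP listener:

```
curl -i http://localhost:8888/health
```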
VerneMQ provides multiple hooks throughout the lifetime of a session. The most important ones are the auth_on_register
and auth_on_register_m5
hooks which act as an application level firewall granting or rejecting new clients.
The auth_on_register
and auth_on_register_m5
hooks allow your plugin to grant or reject new client connections. Moreover it lets you exert fine grained control over the configuration of the client session. The auth_on_register
hook is specified in the Erlang behaviour auth_on_register_hook and the auth_on_register_m5
hook in the auth_on_register_m5_hook behaviour available in the vernemq_dev repo.
Every plugin that implements the auth_on_register
or auth_on_register_m5
hooks is part of a conditional plugin chain. For this reason we allow the hook to return different values depending on how the plugin grants or rejects this client. In case the plugin doesn't know the client it is best to return next
as this would allow subsequent plugins in the chain to validate this client. If no plugin is able to validate the client it gets automatically rejected.
The on_auth_m5
hook allows your plugin to implement MQTT enhanced authentication, see Enhanced Authentication Flow.
The on_register
and on_register_m5
hooks allow your plugin to get informed about a newly authenticated client. The hook is specified in the Erlang behaviour on_register_hook and the on_register_m5_hook behaviour available in the vernemq_dev repo.
Once a new client was successfully authenticated and the above described hooks have been called, the client attaches to its queue. If it is a returning client using clean_session=false
or if the client had previous sessions in the cluster, this process could take a while. (As offline messages are migrated to a new node, existing sessions are disconnected). The on_client_wakeup hook is called at the point where a queue has been successfully instantiated, possible offline messages migrated, and potential duplicate sessions have been disconnected. In other words: when the client has reached a completely initialized, normal state for accepting messages. The hook is specified in the Erlang behaviour on_client_wakeup_hook
available in the vernemq_dev repo.
This hook is called if an MQTT 3.1/3.1.1 client using clean_session=false
or an MQTT 5.0 client with a non-zero session_expiry_interval
closes the connection or gets disconnected by a duplicate client. The hook is specified in the Erlang behaviour on_client_offline_hook available in the vernemq_dev repo.
This hook is called if an MQTT 3.1/3.1.1 client using clean_session=true
or an MQTT 5.0 client with the session_expiry_interval
set to zero closes the connection or gets disconnected by a duplicate client. The hook is specified in the Erlang behaviour on_client_gone_hook available in the vernemq_dev repo.
In this section the publish flow is described. VerneMQ provides multiple hooks throughout the flow of a message. The most important ones are the auth_on_publish
and auth_on_publish_m5
hooks which act as an application level firewall granting or rejecting a publish message.
The auth_on_publish
and auth_on_publish_m5
hooks allow your plugin to grant or reject publish requests sent by a client. They also make it possible to rewrite the publish topic, payload, QoS, or retain flag and, in the case of auth_on_publish_m5, the properties. The auth_on_publish
hook is specified in the Erlang behaviour auth_on_publish_hook and the auth_on_publish_m5
hook in the auth_on_publish_m5_hook behaviour available in the vernemq_dev repo.
Every plugin that implements the auth_on_publish
or auth_on_publish_m5
hooks is part of a conditional plugin chain. For this reason we allow the hook to return different values. In case the plugin can't validate the publish message it is best to return next
as this would allow subsequent plugins in the chain to validate the request. If no plugin is able to validate the request it gets automatically rejected.
The on_publish
and on_publish_m5
hooks allow your plugin to get informed about an authorized publish message. The hook is specified in the Erlang behaviour on_publish_hook and the on_publish_m5
hook in the on_publish_m5_hook behaviour available in the vernemq_dev repo.
The on_offline_message
hook allows your plugin to get notified about a newly queued message for a client that is currently offline. The hook is specified in the Erlang behaviour on_offline_message_hook available in the vernemq_dev repo.
The on_deliver
and on_deliver_m5
hooks allow your plugin to get informed about outgoing publish messages, but also allows you to rewrite topic and payload of the outgoing message. The hook is specified in the Erlang behaviour on_deliver_hook and the on_deliver_m5
hook in the on_deliver_m5_hook behaviour available in the vernemq_dev repo.
Every plugin that implements the on_deliver
or on_deliver_m5
hooks is part of a conditional plugin chain, although NO verdict is required in this case. The message gets delivered in any case. If your plugin uses this hook to rewrite the message the plugin system stops evaluating subsequent plugins in the chain.
The VerneMQ Status Page
VerneMQ comes with a built-in Status Page that is enabled by default and is available on http://localhost:8888/status
, see HTTP listeners.
The Status Page is a simple overview of the cluster and the individual nodes in the cluster as seen below. Note that while the Status Page is running on each node of the cluster, it's enough to look at one of them to get a quick status of your cluster.
The Status Page has the following sections:
Issues (Warnings on netsplits, etc)
Cluster Overview
Node Status
The Status Page will automatically refresh itself every 10 seconds and tries to calculate rates in Javascript, based on that reload window. Therefore, the displayed rates might be slightly inaccurate. The Status Page should not be considered a replacement for a metrics system. If you run VerneMQ in production, you will certainly want to hook it up to a metrics system like Prometheus.
VerneMQ supports enhanced authentication flows (SASL style authentication) for MQTT 5.0 sessions. The enhanced authentication mechanism can be used for initial authentication when the client connects or to re-authenticate clients at a later point.
The on_auth_m5
hook allows the plugin to implement SASL style authentication flows by either accepting, rejecting (disconnecting the client) or continue the flow. The on_auth_m5
hook is specified in the corresponding Erlang behaviour available in the vernemq_dev repo.
We recommend to use the rebar3
toolchain to generate the basic Erlang OTP application boilerplate and start from there.
Change the rebar.config
file to include the vernemq_dev
dependency:
Compile the application, this will automatically fetch vernemq_dev
.
Now you're ready to implement the hooks. Don't forget to add the proper vmq_plugin_hooks
entries to your src/myplugin.app.src
file.
Loadtesting VerneMQ with vmq_mzbench
You can loadtest VerneMQ with our vmq_mzbench tool. It is based on Machinezone's very powerful MZBench system and lets you narrow down what hardware specs are needed to meet your performance goals. You can state your requirements for latency percentiles (and much more) in a formal way, and let vmq_mzbench automatically fail, if it can't meet the requirements.
If you have an AWS account, vmq_mzbench can automagically provision worker nodes for you. You can also run it locally, of course.
Please follow the MZBench installation guide.
Actually, you don't even have to install vmq_mzbench, if you don't want to. Your scenario file will automatically fetch vmq_mzbench for any test you do. vmq_mzbench runs every test independently, so it has a provisioning step for any test, even if you only run it on a local worker.
In case you still want to have vmq_mzbench on your local machine, go through the following steps:
To provision your tests from this local repository, you'll have to tell the scenario scripts to use rsync. Add this to the scenario file:
If you'd just like the script itself fetch vmq_mzbench, then you can direct it to github:
There's not much to learn, just make sure you understand how pools and loops work. Then you can add the vmq_mzbench statement functions to the mix and define actual loadtest scenarios.
Here's a list of the most important vmq_mzbench statement functions you can use in MQTT scenario files:
random_client_id(State, Meta, I)
: Create a random client Id of length I
fixed_client_id(State, Meta, Name, Id)
: Create a deterministic client Id with schema Name ++ "-" ++ Id
worker_id(State, Meta)
: Get the internal, sequential worker Id
client(State, Meta)
: Get the client Id you set yourself during connection setup with the option {t, client, "client"}
connect(State, Meta, ConnectOpts)
: Connect to the broker with the options given in ConnectOpts
disconnect(State, Meta)
: Disconnect normally
subscribe(State, Meta, Topic, QoS)
: Subscribe to Topic with Quality of Service QoS
subscribe_to_self(State, _Meta, TopicPrefix, Qos)
: Subscribe to an exclusive topic, for 1:1 testing
unsubscribe(State, Meta, Topic)
: Unubscribe from Topic
publish(State, Meta, Topic, Payload, QoS)
: Publish a message with binary Payload to Topic with QoS
publish(State, Meta, Topic, Payload, QoS, RetainFlag)
: Publish a message with binary Payload to Topic with QoS and RetainFlag
publish_to_self(State, Meta, TopicPrefix, Payload, Qos)
: Publish a payload to an exclusive topic, for 1:1 testing
Learn how to implement VerneMQ Plugins for customizing many aspects of how VerneMQ deals with client connections, subscriptions, and message flows.
VerneMQ is implemented in Erlang and therefore runs on top of the Erlang VM. For this reason native plugins have to be developed in a programming language that runs on the Erlang VM. The most popular choice is obviously the Erlang programming language itself, but Elixir or Lisp Flavoured Erlang (LFE) could be used too. That said, all the plugin hooks are also exposed over (a subset of) Lua, and over WebHooks. This allows you to implement a VerneMQ plugin by simply implementing a WebHook endpoint, using any programming language you like. You can also implement a VerneMQ plugin as a Lua script.
Be aware that in VerneMQ a plugin does NOT run in a sandboxed environment and misbehaviour could seriously harm the system (e.g. performance degradation, reduced availability as well as consistency, and message loss). Get in touch with us in case you require a review of your plugin.
This guide explains the different flows that expose different hooks to be used for custom plugins. It also describes the code structure a plugin must comply to in order to be successfully loaded and started by the VerneMQ plugin mechanism.
All the hooks that are currently exposed fall into one of three categories.
Hooks that allow you to change the protocol flow. An example could be to authenticate a client using the auth_on_register
hook.
Hooks that inform you about a certain action, that could be used for example to implement a custom logging or audit plugin.
Hooks that are called given a certain condition
Notice that some hooks come in two variants, for example the auth_on_register
and then auth_on_register_m5
hooks. The _m5
postfix refers to the fact that this hook is only invoked in an MQTT 5.0 session context whereas the other is invoked in a MQTT 3.1/3.1.1 session context.
Before going into the details, let's give a quick intro to the VerneMQ plugin system.
The VerneMQ plugin system allows you to load, unload, start and stop plugins during runtime, and you can even upgrade a plugin during runtime. To make this work it is required that the plugin is an OTP application and strictly follows the rules of implementing the Erlang OTP application behaviour. It is recommended to use the rebar3
toolchain to compile the plugin. VerneMQ comes with built-in support for the directory structure used by rebar3
.
Every plugin has to describe the hooks it is implementing as part of its application environment file. The vmq_acl
plugin for instance comes with the application environment file below:
Lines 6 to 10 instruct the plugin system to ensure that those dependent applications are loaded and started. If you're using third party dependencies make sure that they are available in compiled form and part of the plugin load path. Lines 16 to 20 allow the plugin system to compile the plugin rules. Yes, you've heard correctly. The rules are compiled into Erlang VM code to make sure the lookup and execution of plugin code is as fast as possible. Some hooks exist which are used internally such as the change_config/1
; we'll describe those at some other point.
The environment value for vmq_plugin_hooks
is a list of hooks. A hook is specified by {Module, Function, Arity, Options}
.
To streamline the plugin development we provide a different Erlang behaviour for every hook a plugin implements. Those behaviours are part of the vernemq_dev
library application, which you should add as a dependency to your plugin. vernemq_dev
also comes with a header file that contains all the type definitions used by the hooks.
It is possible to have multiple plugins serving the same hook. Depending on the hook the plugin chain is used differently. The most elaborate chains can be found for the hooks that deal with authentication and authorization flows. We also call them conditional chains as a plugin can give control away to the next plugin in the chain. The image shows a sample plugin chain for the auth_on_register
hook.
Most hooks don't require conditions and are mainly used as event handlers. In this case all plugins in a chain are called. An example for such a hook would be the on_register
hook.
A rather specific case is the need to call only one plugin instead of iterating through the whole chain. VerneMQ uses such hooks for its pluggable message storage system.
Unless you're implementing your custom message storage backend, you probably won't need this style of hook.
The position in the plugin call chain is currently implicitly given by the order the plugins have been started.
The plugin mechanism uses the application environment file to infer the applications that it has to load and start prior to starting the plugin itself. It internally uses the application:ensure_all_started/1
function call to start the plugin. If your setup is more complex you could override this behaviour by implementing a custom start/0
function inside a module that's named after your plugin.
The plugin mechanism uses application:stop/1
to stop and unload the plugin. This won't stop the dependent application started at startup. If you rely on third party applications that aren't started as part of the VerneMQ release, e.g. a database driver, you can implement a custom stop/0
function inside a module that's named after your plugin and properly stop the driver there.
The vmq_types.hrl
exposes all the type specs used by the hooks. The following types are used by the plugin system:
In this section the subscription flow is described. VerneMQ provides several hooks to intercept the subscription flow. The most important ones are the auth_on_subscribe
and auth_on_subscribe_m5
hooks which act as an application level firewall granting or rejecting subscribe requests.
In the following we describe how a typical VerneMQ deployment can look and some of the concerns one has to take into account when designing such a system.
A typical VerneMQ deployment could, from a high level, look like the following:
In this scenario MQTT clients connect from the internet and are authenticated and authorized against the Authentication Management Service and publish and receive messages, either with each other or with the Backend-Services which might be responsible for sending control messages to the clients or storing and forwarding messages to other systems or databases for later processing.
To build and deploy a system such as the above, a lot of decisions have to be made. These can concern how to do authentication and authorization, where to do TLS termination, how the load balancer should be configured (if one is needed at all), what the MQTT architecture and topic trees should look like, and how and to what level the system can or should scale. To simplify the following discussion we'll set a few requirements:
Clients connecting from the internet are using TLS client certificates
The messaging pattern is largely fan-in: The clients continuously publish a lot of messages to a set of topics which have to be handled by the Backend-Services.
The client sessions are persistent, which means the broker will store QoS 1 & 2 messages routed to the clients while the clients are offline.
In the following we'll cover some of these options and concerns.
Often a load balancer is deployed between MQTT clients and the VerneMQ cluster. One of the main purposes of the load balancer is to ensure that client connections are distributed between the VerneMQ nodes so each node has the same amount of connections. Usually a load balancer provides different load balancing strategies for deciding how to select the node where it should route an incoming connection. Examples of these are random, source hashing (based on source IP) or even protocol-aware balancing based on for example the MQTT client-id. The last two are examples of sticky balancing or session affine strategies where a client will always be routed to the same cluster node as long as the source IP or client-id remains the same.
When using a load balancer the client is no longer directly connected to the VerneMQ nodes, and the peer port and IP address VerneMQ sees are therefore those of the load balancer, not of the client. The peer information is often important for logging reasons or if a plugin checks it against a white/black list.
Often client certificates are used to verify and authenticate the clients. VerneMQ makes it possible to make the client certificate common name (CN) available to the authentication plugin system by overriding the MQTT username with the CN before authentication is performed. This works both if TLS is terminated at a load balancer (in which case the PROXY protocol is used to forward the certificate information) and if TLS is terminated directly in VerneMQ. In case TLS is terminated at the load balancer, the listener can be configured as follows to achieve this effect:
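A hedged sketch of such a listener configuration (the option names are assumptions and should be checked against your vernemq.conf template):

```
listener.tcp.proxy_protocol = on
listener.tcp.proxy_protocol_use_cn_as_username = on
```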
If TLS is terminated directly in VerneMQ the PROXY protocol isn't needed, as the TLS client certificate is directly available in VerneMQ and the CN can be used instead of the username by setting:
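A sketch of the corresponding SSL listener options (key names assumed from the default configuration file):

```
listener.ssl.require_certificate = on
listener.ssl.use_identity_as_username = on
```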
Another important aspect of running a VerneMQ cluster is having proper monitoring and alerting in place. All the usual things such as memory and CPU usage should be monitored at the OS level, and alerts should be put in place so that action can be taken if a disk is filling up or a VerneMQ node is starting to use too much CPU. VerneMQ exports a large number of metrics and, depending on the use case, these can be used as important indicators that the system is running as expected.
When designing a system like the one described here, there are a number of things to consider in order to get the best performance out of the available resources.
As mentioned earlier clients in this scenario are using persistent sessions. In VerneMQ a persistent session exists only on the VerneMQ node where the client connected. This implies that if the client using a persistent session later reconnects to another node, then the session, including any offline messages, will be moved to the new node. This has a certain overhead and can be avoided if the load balancer in front of VerneMQ is using a session affine load balancing strategy such as IP source hashing to assign the client connecting to a node. Of course this strategy isn't perfect if clients often change their IP addresses, but for most cases it is a huge improvement over a random load balancing strategy.
An important guideline in protecting a VerneMQ cluster from overload is to allow only what is necessary. This means having and enforcing sensible authentication and authorization rules as well as configuring conservatively so resources cannot be exhausted due to human error or MQTT clients that have turned malicious. For example, in VerneMQ it is possible to specify how many offline messages a persistent session can maximally hold via the max_offline_messages
setting - and it should then be set to the lowest acceptable value which works for all clients, and/or a plugin can be used which is able to override such settings on a per-client basis. The load balancer can also play an important role in protecting the system in that it can control the connect rates as well as imposing bandwidth restrictions on clients.
How to implement VerneMQ plugins using a HTTP interface
The VerneMQ Webhooks plugin provides an easy and flexible way to build powerful plugins for VerneMQ using web hooks. With VerneMQ Webhooks you are free to select the implementation language to match your technical requirements or the language you feel comfortable and productive in. You can use any modern language such as Python, Go, C#/.Net and indeed any language in which you can build something that can handle HTTP requests.
The idea of VerneMQ Webhooks is very simple: you can register an HTTP endpoint with a VerneMQ plugin hook and whenever the hook (such as auth_on_register
) is called, the VerneMQ Webhooks plugin dispatches an HTTP POST request to the registered endpoint. The HTTP POST request contains an HTTP header like vernemq-hook: auth_on_register
and a JSON encoded payload. The endpoint then responds with code 200 on success and with a JSON encoded payload informing the VerneMQ Webhooks plugin which action to take (if any).
To enable webhooks make sure to set:
And then each webhook can be configured like this:
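A minimal sketch, assuming a hypothetical endpoint on localhost port 8000 (key names assumed from the default configuration file):

```
plugins.vmq_webhooks = on

vmq_webhooks.mywebhook1.hook = auth_on_register
vmq_webhooks.mywebhook1.endpoint = http://127.0.0.1:8000/myendpoint
```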
It is possible to have the webhooks plugin omit sending the payload for certain webhooks by setting the no_payload
config:
It is also possible to dynamically register webhooks at run-time:
See which endpoints are registered:
And finally deregistering an endpoint:
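A sketch of the corresponding vmq-admin webhooks commands (the endpoint and hook are placeholders):

```
vmq-admin webhooks register hook=auth_on_register endpoint="http://127.0.0.1:8000/myendpoint"
vmq-admin webhooks show
vmq-admin webhooks deregister hook=auth_on_register endpoint="http://127.0.0.1:8000/myendpoint"
```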
We recommend placing the endpoint implementation locally on each VerneMQ node such that each request can go over localhost without being subject to network issues. Also note that currently VerneMQ Webhooks does not encrypt requests in any way or use HTTPS, so care should be taken if the endpoints are made reachable over the network.
Each registered hook uses, by default, a connection pool containing at most 100 connections. This can be changed by setting vmq_webhooks.pool_max_connections
to a different value. Similarly the vmq_webhooks.pool_timeout
configuration (value is in milliseconds) can be set to control how long an unused connection should stay in the connection pool before being closed and removed. The default value is 60000 (60 seconds).
These options are available in VerneMQ 1.4.0.
VerneMQ webhooks support caching of the auth_on_register
, auth_on_publish
and auth_on_subscribe
hooks.
This can be used to speed up authentication and authorization tremendously. All data passed to these hooks is used to look up whether the call is in the cache, except in the case of auth_on_publish
where the payload is omitted.
To enable caching for an endpoint simply return the cache-control: max-age=AgeInSeconds
in the response headers to one of the mentioned hooks. If the call was successful (authentication granted), the request will be cached together with any modifiers, except for the payload
modifier in the auth_on_publish
hook.
Whenever a non-expired entry is looked up in the cache the endpoint will not be called and the modifiers of the cached entry will be returned, if any.
It is possible to inspect the cache using:
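A sketch (the exact sub-command should be checked with vmq-admin webhooks --help):

```
vmq-admin webhooks cache show
```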
Cache entries are currently not actively disposed after expiry and will remain in memory.
All webhooks are called with method POST
. All hooks need to be answered with the HTTP code 200
to be considered successful. Any hook called that does not return the 200
code will be logged as an error as will any hook with an unparseable payload.
All hooks are called with the header vernemq-hook
which contains the name of the hook in question.
Note, when overriding a mountpoint or a client-id both have to be returned by the webhook implementation for it to have an effect.
Header: vernemq-hook: auth_on_register
Webhook example payload:
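A sketch of such a payload (all field values are placeholders):

```json
{
  "peer_addr": "127.0.0.1",
  "peer_port": 8888,
  "username": "joe",
  "password": "secret",
  "mountpoint": "",
  "client_id": "client-abc",
  "clean_session": true
}
```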
A minimal response indicating the authentication was successful looks like:
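Something like:

```json
{
  "result": "ok"
}
```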
It is also possible to override various client specific settings by returning an array of modifiers:
Other possible responses:
Header: vernemq-hook: auth_on_subscribe
Webhook example payload:
A minimal response indicating the subscription authorization was successful looks like:
Another example, where the topics to subscribe to have been rewritten, looks like:
Note, you can also pass a qos
with value 128
which means it was either not possible or the client was not allowed to subscribe to that specific topic.
Other possible responses:
Header: vernemq-hook: auth_on_publish
Note, in the example below the payload is not base64 encoded which is not the default.
Webhook example payload:
A minimal response indicating the publish was authorized looks like:
A more complex example where the publish topic, qos, payload and retain flag is rewritten looks like:
Other possible responses:
Header: vernemq-hook: on_register
Webhook example payload:
The response should be an empty json object {}
.
Header: vernemq-hook: on_publish
Note, in the example below the payload is not base64 encoded which is not the default.
Webhook example payload:
The response should be an empty json object {}
.
Header: vernemq-hook: on_subscribe
Webhook example payload:
The response should be an empty json object {}
.
Header: vernemq-hook: on_unsubscribe
Webhook example payload:
Example response:
Other possible responses:
Header: vernemq-hook: on_deliver
Note, in the example below the payload is not base64 encoded which is not the default.
Webhook example payload:
Example response:
Other possible responses:
Header: vernemq-hook: on_offline_message
Note, in the example below the payload is not base64 encoded which is not the default.
Webhook example payload:
The response should be an empty json object {}
.
Header: vernemq-hook: on_client_wakeup
Webhook example payload:
The response should be an empty json object {}
.
Header: vernemq-hook: on_client_offline
Webhook example payload:
The response should be an empty json object {}
.
Header: vernemq-hook: on_client_gone
Webhook example payload:
The response should be an empty json object {}
.
Header: vernemq-hook: auth_on_register_m5
Webhook example payload:
A minimal response indicating the authentication was successful looks like:
It is also possible to override various client specific settings by returning an array of modifiers:
Other possible responses:
Header vernemq-hook: on_auth_m5
Webhook example payload:
Note, as the authentication data is binary data it is base64 encoded.
A minimal response indicating the authentication was successful looks like:
Header: vernemq-hook: auth_on_subscribe_m5
Webhook example payload:
A minimal response indicating the subscription authorization was successful looks like:
Another example, where the topics to subscribe to have been rewritten, looks like:
Note, the forbidden/topic
has been rejected with the qos
value of 135 (Not authorized).
Other responses
Header: vernemq-hook: auth_on_publish_m5
Note, in the example below the payload is not base64 encoded which is not the default.
Webhook example payload:
A minimal response indicating the publish was authorized looks like:
A response where the publish topic has been rewritten:
Other possible responses:
Header: vernemq-hook: on_register_m5
Webhook example payload:
The response should be an empty json object {}
.
Header: vernemq-hook: on_publish_m5
Note, in the example below the payload is base64 encoded.
Webhook example payload:
The response should be an empty json object {}
.
Header: vernemq-hook: on_subscribe_m5
Webhook example payload:
Note, the qos value of 128
(Unspecified error) means the subscription was rejected.
The response should be an empty json object {}
.
Header: vernemq-hook: on_unsubscribe_m5
Webhook example payload:
Example response:
Other possible responses:
Header: vernemq-hook: on_deliver_m5
Note, in the example below the payload is not base64 encoded which is not the default.
Webhook example payload:
Example response:
Other possible responses:
Below is a very simple example of an endpoint implemented in Python. It uses the web
and json
modules and implements handlers for three different hooks: auth_on_register
, auth_on_publish
and auth_on_subscribe
.
The auth_on_register
hook restricts access to only the user with username joe
and password secret
. The auth_on_subscribe
and auth_on_publish
hooks allow any subscription or publish to continue as is. These last two hooks are needed as the default policy is deny
.
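The original code listing is not reproduced here; the following is a minimal sketch of such an endpoint using web.py (URL pattern, port and error messages are assumptions):

```python
# Minimal sketch of a VerneMQ webhooks endpoint using web.py.
import json
import web

urls = ('/.*', 'hooks')
app = web.application(urls, globals())


class hooks:
    def POST(self):
        # The hook name is passed in the vernemq-hook header.
        hook = web.ctx.env.get('HTTP_VERNEMQ_HOOK')
        data = json.loads(web.data())

        if hook == 'auth_on_register':
            # only allow the user "joe" with password "secret"
            if data.get('username') == 'joe' and data.get('password') == 'secret':
                return json.dumps({'result': 'ok'})
            return json.dumps({'result': {'error': 'not allowed'}})
        elif hook in ('auth_on_publish', 'auth_on_subscribe'):
            # allow all publishes and subscriptions as-is
            return json.dumps({'result': 'ok'})
        return json.dumps({'result': {'error': 'unknown hook'}})


if __name__ == '__main__':
    app.run()
```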
You need to know about and configure a couple of Operating System and Erlang VM configs to operate VerneMQ efficiently. First, make sure you have set appropriate OS file limits according to our Open File Limits guide. Second, when you run into performance problems, don't forget to check the listener settings. (Can't open more than 10k connections? Well, is the listener configured to accept more than 10k?)
This is the number one topic to look at, if you need to keep an eye on RAM usage.
Context: All network I/O in Erlang uses an internal driver. This driver will allocate and handle an internal application side buffer for every TCP connection. The default size of these buffers will determine your overall RAM use.
VerneMQ calculates the buffer size from the OS level TCP send and receive buffers:
val(buffer) >= max(val(sndbuf),val(recbuf))
Those values correspond to net.ipv4.tcp_wmem
and net.ipv4.tcp_rmem
in your OS's sysctl configuration. One way to minimize RAM usage is therefore to configure those settings (Debian example):
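A sketch of such a change (the values are examples; min, default and max are given in bytes):

```
sudo sysctl -w net.ipv4.tcp_rmem="4096 16384 32768"
sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 32768"
```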
This would result in a 32KB application buffer for every connection. On a multi-purpose server where you install VerneMQ as a test, you might not want to change your OS's TCP settings, of course. In that case, you can still configure the buffer sizes manually for VerneMQ by using the advanced.config
file.
The advanced.config
file is a supplementary configuration file that sits in the same directory as the vernemq.conf
. You can set additional config values for any of the OTP applications that are part of a VerneMQ release. To just configure the TCP buffer size manually, you can create an advanced.config
file:
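A hedged sketch of such an advanced.config (the application and option names are assumptions and should be checked against your VerneMQ release):

```erlang
[{vmq_server, [
    {tcp_listen_options, [{sndbuf, 4096},
                          {recbuf, 4096},
                          {buffer, 4096}]}
]}].
```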
For very advanced & custom configurations, you can add a vm.args
file to the same directory where the vernemq.conf
file is located. Its purpose is to configure parameters for the Erlang Virtual Machine. This will override any Erlang specific parameters you might have configured via the vernemq.conf
. Normally, VerneMQ auto-generates a vm.args file for every boot in /var/lib/vernemq/generated.configs/
(Debian package example) from vernemq.conf
and other potential configuration sources.
A manually generated vm.args
is not supplementary, it is a full replacement of the auto-generated file! Keep that in mind. An easy way to go about this, is by copying and extending the auto-generated file.
This is how a vm.args
file might look:
Using TLS will of course increase the CPU load during connection setup. Latencies in message delivery will be increased, and your overall message throughput per second will be lower.
TLS will require considerably more RAM. Instead of 2 Erlang processes per connection, TLS will use 3. You'll have a session process, a queue process, and a TLS handler process that can encapsulate quite a big state (> 30KB).
Erlang/OTP uses its own TLS implementation, only using OpenSSL for crypto, but not connection handling. For situations with high connection setup rate or overall high connection churn rate, the Erlang TLS implementation might be too slow. On the other hand, Erlang TLS gives you great concurrency & fault isolation for long-lived connections.
Some Erlang deployments terminate SSL/TLS with an external component or with a load balancer component. Do some testing & try to find out what works best for you.
The Erlang TLS implementation is rather picky on certificate chains & formats. Don't give up if you encounter errors at first. On Linux, you can quickly find out more with the openssl s_client command.
For a complete example, see the .
MZBench recently switched from an Erlang-styled Scenario DSL to a more python-like DSL dubbed BDL (Benchmark Definition Language). Have a look at the BDL examples on Github.
You can familiarize yourself quickly with MZBench's guide on writing loadtest scenarios.
It's easy to add more statement functions to the MQTT worker if needed. For a full list of the exported statement functions, we encourage you to have a look at the code directly.
The auth_on_subscribe
and auth_on_subscribe_m5
hooks allow your plugin to grant or reject subscribe requests sent by a client. They also make it possible to rewrite the subscribe topic and QoS. The auth_on_subscribe hook is specified in the Erlang behaviour auth_on_subscribe_hook and the auth_on_subscribe_m5 hook in the auth_on_subscribe_m5_hook behaviour available in the vernemq_dev repo.
The on_subscribe
and on_subscribe_m5
hooks allow your plugin to get informed about an authorized subscribe request. The on_subscribe
hook is specified in the Erlang behaviour on_subscribe_hook and the on_subscribe_m5 hook in the on_subscribe_m5_hook behaviour available in the vernemq_dev repo.
The on_unsubscribe
and on_unsubscribe_m5
hooks allow your plugin to get informed about an unsubscribe request. They also allow you to rewrite the unsubscribe topic if required. The on_unsubscribe hook is specified in the Erlang behaviour on_unsubscribe_hook and the on_unsubscribe_m5 hook in the on_unsubscribe_m5_hook behaviour available in the vernemq_dev repo.
To solve this problem VerneMQ supports the PROXY protocol v1 and v2, which is designed to transport connection information across proxies. See how to enable the PROXY protocol for an MQTT listener. In case TLS is terminated at the load balancer and client certificates are used, the PROXY protocol (v2) will also take care of forwarding the TLS client certificate details.
See the details in the section.
The actual authentication can then be handled by an authentication and authorization plugin such as vmq_diversity, which supports several databases as backends for storing credentials and ACL rules.
In many systems the MQTT clients provide a lot of data by periodically broadcasting data to the MQTT cluster. The amount of published messages can very easily become hard to manage for a single MQTT client. Further, using normal MQTT subscriptions, all subscribers would receive the same messages, so adding more subscribers to a topic doesn't help with handling the amount of messages. To solve this VerneMQ implements a concept called shared subscriptions, which makes it possible to distribute MQTT messages published to a topic over several MQTT clients. In this specific scenario this would mean the Backend-Services would consist of a set of clients subscribing to cluster nodes using shared subscriptions.
To avoid expensive intra-node communication, VerneMQ shared subscriptions support a policy called local_only
which means that messages will be delivered to shared subscribers on the local node only and not forwarded to shared subscribers on other nodes in the cluster. With this policy messages for the Backend-Services can be delivered in the fastest and most expedient manner with the lowest overhead. See the shared subscriptions documentation for more information.
Controlling TCP buffer sizes is important in ensuring optimal memory usage. The rule is that the more bandwidth or the lower the latency required, the larger the TCP buffer sizes should be. Many IoT devices communicate with a very low bandwidth and as such the server side TCP buffer sizes for these do not need to be very large. On the other hand, in this scenario the consumers handling the fan-in in the Backend-Services will receive many (thousands or tens of thousands of) messages per second and can benefit from larger TCP buffer sizes. Read more about tuning TCP buffer sizes in the corresponding section.
Somehow a system like this has to be deployed. How to do this will not be covered here, but it is certainly possible to deploy VerneMQ using common provisioning tools or container solutions such as Kubernetes. For more information on how to deploy VerneMQ on Kubernetes check out our guide.
For detailed information about the hooks and when they are called, see the corresponding sections of the plugin development guide.
Note, the retry_interval
is in milliseconds. It is possible to override many more settings; see the documentation for more information.
Note, the retry_interval
is in milliseconds. It is possible to override many more settings; see the documentation for more information.
If authentication were to continue for another round a reason code with value 24 (Continue Authentication) should be returned instead. See also the relevant section in the MQTT 5.0 specification.
Loadtesting VerneMQ with vmq_mzbench
You can loadtest VerneMQ with our vmq_mzbench tool. It is based on Machinezone's very powerful MZBench system and lets you narrow down what hardware specs are needed to meet your performance goals. You can state your requirements for latency percentiles (and much more) in a formal way, and let vmq_mzbench automatically fail, if it can't meet the requirements.
If you have an AWS account, vmq_mzbench can automagically provision worker nodes for you. You can also run it locally, of course.
Please follow the MZBench installation guide
Actually, you don't even have to install vmq_mzbench, if you don't want to. Your scenario file will automatically fetch vmq_mzbench for any test you do. vmq_mzbench runs every test independently, so it has a provisioning step for any test, even if you only run it on a local worker.
To install vmq_mzbench on your computer, go through the following steps:
To provision your tests from this local repository, you'll have to tell the scenario scripts to use rsync. Add this to the scenario file:
If you'd just like the script itself fetch vmq_mzbench, then you can direct it to github:
MZBench recently switched from an Erlang-styled Scenario DSL to a more python-like DSL dubbed BDL (Benchmark Definition Language). Have a look at the BDL examples on Github.
You can familiarize yourself quickly with MZBench's guide on writing loadtest scenarios.
There's not much to learn, just make sure you understand how pools and loops work. Then you can add the vmq_mzbench statement functions to the mix and define actual loadtest scenarios.
Currently vmq_mzbench exposes the following statement functions for use in MQTT scenario files:
random_client_id(State, Meta, I)
: Create a random client Id of length I
fixed_client_id(State, Meta, Name, Id)
: Create a deterministic client Id with schema Name ++ "-" ++ Id
worker_id(State, Meta)
: Get the internal, sequential worker Id
client(State, Meta)
: Get the client Id you set yourself during connection setup with the option {t, client, "client"}
connect(State, Meta, ConnectOpts)
: Connect to the broker with the options given in ConnectOpts
disconnect(State, Meta)
: Disconnect normally
subscribe(State, Meta, Topic, QoS)
: Subscribe to Topic with Quality of Service QoS
unsubscribe(State, Meta, Topic)
: Unubscribe from Topic
publish(State, Meta, Topic, Payload, QoS)
: Publish a message with binary Payload to Topic with QoS
publish(State, Meta, Topic, Payload, QoS, RetainFlag)
: Publish a message with binary Payload to Topic with QoS and RetainFlag
It's easy to add more statement functions to the MQTT worker if needed, get in touch with us.
How to change the open file limits
VerneMQ can consume a large number of open file handles when thousands of clients are connected as every connection requires at least one file handle.
Most operating systems can change the open-files limit using the ulimit -n
command. Example:
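For example:

```
ulimit -n 262144
```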
However, this only changes the limit for the current shell session. Changing the limit on a system-wide, permanent basis varies more between systems.
What will actually happen when VerneMQ runs out of OS-side file descriptors?
In short, VerneMQ will be unable to function properly, because it can't open database files or accept incoming connections. In case you see exceptions with {error,emfile}
in the VerneMQ log files, you now know what to do, though: increase the OS settings as described below.
On most Linux distributions, the total limit for open files is controlled by sysctl
.
An alternative way to read the file-max
settings is:
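For example:

```
sysctl fs.file-max
cat /proc/sys/fs/file-max
```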
This might be high enough for your VerneMQ deployment, or not - we cannot know that. You will need at least 1 file descriptor per TCP connection, and VerneMQ needs additional file descriptors for file access etc. Also, if you have other components running on the system, you might want to consult the sysctl manpage for how to change that setting. The fs.file-max
setting represents the global maximum of file handlers a Linux kernel will allocate. Make sure this is high enough for your system.
Once you're good regarding file-max
, you still need to configure the per-process open files limit. You'll set the number of file descriptors a single process or application like VerneMQ is allowed to grab. As every process belongs to a user, you need to bind the setting to a Linux user (here, the vernemq
user). To do this, edit /etc/security/limits.conf
, for which you'll need superuser access. If you installed VerneMQ from a binary package, add lines for the vernemq
user, substituting your desired hard and soft limits:
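For example (65536 is just a placeholder value):

```
vernemq soft nofile 65536
vernemq hard nofile 65536
```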
On Ubuntu, if you’re always relying on the init scripts to start VerneMQ, you can create the file /etc/default/vernemq and specify a manual limit:
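For example (the value is a placeholder):

```
ulimit -n 65536
```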
This file is automatically sourced from the init script, and the VerneMQ process started by it will properly inherit this setting. As init scripts are always run as the root user, there’s no need to specifically set limits in /etc/security/limits.conf
if you’re solely relying on init scripts.
On CentOS/RedHat systems, make sure to set a proper limit for the user you’re usually logging in with to do any kind of work on the machine, including managing VerneMQ. On CentOS, sudo
properly inherits the values from the executing user.
Newer VerneMQ packages use a systemd service file. You can adapt the LimitNOFILE
setting in the vernemq.service
file to the value you need. It is set to infinity
by default already, so you only need to adapt it in case you want a lower value. The reason we need to enforce the setting is that systemd doesn't automatically take over the nofile
settings from the OS.
It can be helpful to enable PAM user limits so that non-root users, such as the vernemq
user, may specify a higher value for maximum open files. For example, follow these steps to enable PAM user limits and set the soft and hard values for all users of the system to allow for up to 65536 open files.
Edit /etc/pam.d/common-session
and append the following line:
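The line to append is:

```
session    required   pam_limits.so
```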
If /etc/pam.d/common-session-noninteractive
exists, append the same line as above.
Save and close the file.
Edit /etc/security/limits.conf
and append the following lines to the file:
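For example, to allow up to 65536 open files for all users:

```
*               soft     nofile          65536
*               hard     nofile          65536
```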
Save and close the file.
(optional) If you will be accessing the VerneMQ nodes via secure shell (ssh), you should also edit /etc/ssh/sshd_config
and uncomment the following line:
and set its value to yes
as shown here:
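That is:

```
UsePAM yes
```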
Restart the machine so that the limits take effect and verify
that the new limits are set with the following command:
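For example:

```
ulimit -n
```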
Edit /etc/security/limits.conf
and append the following lines to
the file:
Save and close the file.
Restart the machine so that the limits take effect and verify that the new limits are set with the following command:
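For example:

```
ulimit -n
```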
In the above examples, the open files limit is raised for all users of the system. If you prefer, the limit can be specified for the vernemq
user only by substituting the two asterisks (*) in the examples with vernemq
.
In Solaris 8, there is a default limit of 1024 file descriptors per process. In Solaris 9, the default limit was raised to 65536. To increase the per-process limit on Solaris, add the following line to /etc/system
:
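The original line isn't shown here; the standard Solaris tunable for this is rlim_fd_max, for example:

```
set rlim_fd_max=65536
```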
Reference:
To check the current limits on your Mac OS X system, run:
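This is the same launchctl limit command that is also used below to verify the limits:

```
launchctl limit maxfiles
```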
The last two columns are the soft and hard limits, respectively.
To adjust the maximum open file limits in OS X 10.7 (Lion) or newer, edit /etc/launchd.conf
and increase the limits for both values as appropriate.
For example, to set the soft limit to 16384 files, and the hard limit to 32768 files, perform the following steps:
Verify current limits:
The response output should look something like this:
Edit (or create) /etc/launchd.conf
and increase the limits. Add lines that look like the following (using values appropriate to your environment):
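Using the soft and hard limits from this example:

```
limit maxfiles 16384 32768
```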
Save the file, and restart the system for the new limits to take effect. After restarting, verify the new limits with the launchctl limit command:
The response output should look something like this:
Attributions
This work, "Open File Limits", is a derivative of Open File Limits by Riak, used under Creative Commons Attribution 3.0 Unported License. "Open File Limits" is licensed under Creative Commons Attribution 3.0 Unported License by Erlio GmbH.
Learn how to implement VerneMQ plugins using the Lua Scripting Language.
Developing VerneMQ plugins in Erlang is the most powerful way to extend the functionality of a VerneMQ broker but might be a barrier for developers not familiar with Erlang. For this reason, we've implemented a VerneMQ extension that allows you to develop plugins using the Lua scripting language. This extension is called vmq_diversity and is shipped as part of VerneMQ.
vmq_diversity uses the Luerl Project, which is an implementation of Lua 5.2 in pure Erlang instead of the official Lua interpreter.
Moreover, vmq_diversity provides simple Lua libraries to communicate with MySQL, PostgreSQL, MongoDB, and Redis within your Lua VerneMQ plugins. An additional JSON encoding/decoding library as well as a generic HTTP client library give your Lua scripts a great way to talk to external services.
To enable vmq_diversity
make sure to set:
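The original setting isn't shown here; presumably the plugin is enabled like any other plugin in vernemq.conf:

```
plugins.vmq_diversity = on
```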
Specifying a script to load when VerneMQ starts can be done like this:
It is also possible to dynamically load a Lua script using vmq-admin
:
To reload a script after a change:
If the vmq_diversity
plugin is enabled, the ./share/lua
folder is scanned for Lua scripts to load automatically during startup. The automatic load folder can be configured in the vernemq.conf
file by changing the vmq_diversity.script
setting.
A VerneMQ plugin typically consists of one or more implemented VerneMQ hooks. We tried to keep the differences between the traditional Erlang based and Lua based plugins as small as possible. Please check out the Plugin Development Guide for more information about the different flows and a description of the different hooks.
Let's start with a first very basic example that implements a basic authentication and authorization scheme.
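The original script isn't shown here; a minimal sketch could look as follows, assuming the hook callbacks receive a Lua table with fields such as username and password, and that hooks are registered via a global hooks table (check the Plugin Development Guide for the exact fields):

```lua
-- only allow a single hard-coded user (illustrative credentials)
function auth_on_register(reg)
    if reg.username == "demo-user" and reg.password == "demo-password" then
        return true
    end
    return false
end

-- allow this user to publish and subscribe to any topic
function auth_on_publish(pub)
    return true
end

function auth_on_subscribe(sub)
    return true
end

-- register the implemented hooks with vmq_diversity
hooks = {
    auth_on_register = auth_on_register,
    auth_on_publish = auth_on_publish,
    auth_on_subscribe = auth_on_subscribe
}
```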
It is also possible to try the next plugin in the chain (see: Chaining) by returning next
instead of false
.
This subsection describes the data providers currently available to a Lua script. Every data provider is backed by a connection pool that has to be configured by your script.
ensure_pool
Ensures that the connection pool named config.pool_id
is set up in the system. The config
argument is a Lua table holding the following keys:
pool_id
: Name of the connection pool (mandatory).
size
: Size of the connection pool (default is 5).
user
: MySQL account name for login
password
: MySQL account password for login (in clear text).
host
: Host name for the MySQL server (default is localhost)
port
: Port that the MySQL server is listening on (default is 3306).
database
: MySQL database name.
encoding
: Encoding (default is latin1)
This call throws a badarg error if it cannot set up the pool; otherwise it returns true.
execute
Executes the provided SQL statement using a connection from the connection pool.
pool_id
: Name of the connection pool to use for this statement.
stmt
: A valid MySQL statement.
args...
: A variable number of arguments can be passed to substitute statement parameters.
Depending on the statement this call returns true
or false
or a Lua array containing the resulting rows (as Lua tables). In case the statement cannot be executed a badarg error is thrown.
ensure_pool
Ensures that the connection pool named config.pool_id
is set up in the system. The config
argument is a Lua table holding the following keys:
pool_id
: Name of the connection pool (mandatory).
size
: Size of the connection pool (default is 5).
user
: Postgres account name for login
password
: Postgres account password for login (in clear text).
host
: Host name for the Postgres server (default is localhost)
port
: Port that the Postgres server is listening on (default is 5432).
database
: Postgres database name.
This call throws a badarg error if it cannot set up the pool; otherwise it returns true.
execute
Executes the provided SQL statement using a connection from the connection pool.
pool_id
: Name of the connection pool to use for this statement.
stmt
: A valid PostgreSQL statement.
args...
: A variable number of arguments can be passed to substitute statement parameters.
Depending on the statement this call returns true
or false
or a Lua array containing the resulting rows (as Lua tables). In case the statement cannot be executed a badarg error is thrown.
ensure_pool
Ensures that the connection pool named config.pool_id
is set up in the system. The config
argument is a Lua table holding the following keys:
pool_id
: Name of the connection pool (mandatory).
size
: Size of the connection pool (default is 5).
login
: MongoDB login name
password
: MongoDB password for login.
host
: Host name for the MongoDB server (default is localhost)
port
: Port that the MongoDB server is listening on (default is 27017).
database
: MongoDB database name.
w_mode
: Set mode for writes either to "unsafe" or "safe".
r_mode
: Set mode for reads either to "master" or "slave_ok".
This call throws a badarg error if it cannot set up the pool; otherwise it returns true.
insert
Insert the provided document (or list of documents) into the collection.
pool_id
: Name of the connection pool to use for this statement.
collection
: Name of a MongoDB collection.
doc_or_docs
: A single Lua table or a Lua array containing multiple Lua tables.
The provided document can set the document id using the _id
key. If the id isn't provided, one gets auto-generated. The call returns the inserted document(s) or throws a badarg error if it cannot insert the document(s).
update
Updates all documents in the collection that match the given selector.
pool_id
: Name of the connection pool to use for this statement.
collection
: Name of a MongoDB collection.
selector
: A single Lua table containing the filter properties.
doc
: A single Lua table containing the update properties.
The call returns true
or throws a badarg error if it cannot update the document(s).
delete
Deletes all documents in the collection that match the given selector.
pool_id
: Name of the connection pool to use for this statement.
collection
: Name of a MongoDB collection.
selector
: A single Lua table containing the filter properties.
The call returns true
or throws a badarg error if it cannot delete the document(s).
find
Finds all documents in the collection that match the given selector.
pool_id
: Name of the connection pool to use for this statement.
collection
: Name of a MongoDB collection.
selector
: A single Lua table containing the filter properties.
args
: A Lua table that currently supports an optional projector=LuaTable
element.
The call returns a MongoDB cursor or throws a badarg error if it cannot set up the iterator.
next
Fetches next available document given a cursor object obtained via find
.
The call returns the next available document or false
if all documents have been fetched.
take
Fetches the next nr_of_docs
documents given a cursor object obtained via find
.
The call returns a Lua array containing the documents or false
if all documents have been fetched.
close
Closes and cleans up a cursor object obtained via find
.
The call returns true
.
find_one
Finds the first document in the collection that matches the given selector.
pool_id
: Name of the connection pool to use for this statement.
collection
: Name of a MongoDB collection.
selector
: A single Lua table containing the filter properties.
args
: A Lua table that currently supports an optional projector=LuaTable
element.
The call returns the matched document or false
in case no document was found.
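A sketch of the document API, assuming the library is exposed to Lua as mongodb (collection and field names are illustrative):

```lua
mongodb.ensure_pool({pool_id = "docs", database = "vernemq", host = "localhost", port = 27017})

-- insert a document with an explicit _id
mongodb.insert("docs", "devices", {_id = "dev-1", owner = "alice", online = true})

-- update all documents matching the selector
mongodb.update("docs", "devices", {owner = "alice"}, {online = false})

-- iterate over a result set with a cursor
local cursor = mongodb.find("docs", "devices", {owner = "alice"})
local doc = mongodb.next(cursor)
while doc do
    -- process doc here
    doc = mongodb.next(cursor)
end
mongodb.close(cursor)
```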
ensure_pool
Ensures that the connection pool named config.pool_id
is set up in the system. The config
argument is a Lua table holding the following keys:
pool_id
: Name of the connection pool (mandatory).
size
: Size of the connection pool (default is 5).
password
: Redis password for login.
host
: Host name for the Redis server (default is localhost)
port
: Port that the Redis server is listening on (default is 6379).
database
: Redis database (default is 0).
This call throws a badarg error if it cannot set up the pool; otherwise it returns true.
cmd
Executes the given Redis command.
pool_id
: Name of the connection pool
command
: Redis command string.
args...
: Extra args.
This call returns a Lua table, true
, false
, or nil
. In case it cannot parse the command a badarg error is thrown.
ensure_pool
Ensures that the pool named config.pool_id
is set up in the system. The config
argument is a Lua table holding the following keys:
pool_id
: Name of the connection pool (mandatory).
size
: Size of the connection pool (default is 5).
host
: Host name for the Memcached server (default is localhost)
port
: Port that the Memcached server is listening on (default is 11211).
This call throws a badarg error if it cannot set up the pool; otherwise it returns true.
flush_all(pool_id)
Flushes all data from the Memcached server. Use with care.
Returns true
.
get(pool_id, key)
Get data for key key
.
Returns the data for the key, or false otherwise.
set(pool_id, key, value, expiration)
Unconditionally set a value for a key.
key
: Key.
value
: Value.
expiration
: Time in seconds until the key/value pair is deleted. This parameter is optional, with default value 0 (no expiration).
Returns value
.
add(pool_id, key, value, expiration)
Add a key/value pair if the key doesn't already exist.
key
: Key.
value
: Value.
expiration
: Time in seconds until the key/value pair is deleted. This parameter is optional, with default value 0 (no expiration).
Returns value
if key
didn't already exist, false
otherwise.
replace(pool_id, key, value, expiration)
Replace a key/value pair if the key already exists.
key
: Key.
value
: Value.
expiration
: Time in seconds until the key/value pair is deleted. This parameter is optional, with default value 0 (no expiration).
Returns value
if key
already exists, false
otherwise.
delete(pool_id, key)
Delete key
and the associated value.
Returns true
if the key/value pair was deleted, false
otherwise.
ensure_pool
Ensures that the connection pool named config.pool_id
is set up in the system. The config
argument is a Lua table holding the following keys:
pool_id
: Name of the connection pool (mandatory).
size
: Size of the connection pool (default is 10).
This call throws a badarg error if it cannot set up the pool; otherwise it returns true.
get, put, post, delete
Executes an HTTP request with the given URL and args.
url
: A valid HTTP URL.
body
: optional body to be included in the request.
headers
: optional Lua table containing extra headers to be included in the request.
This call returns false
in case of an error or a Lua table of the form:
body
Fetches the response body given a client ref obtained via the response Lua table.
This call returns false
in case of an error or the response body.
encode
Encodes a Lua value to a JSON string.
This call returns false if it cannot encode the given value.
decode
Decodes a JSON string to a Lua value.
This call returns false if it cannot decode the JSON string.
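For example, assuming the library is exposed to Lua as json:

```lua
local str = json.encode({username = "alice", allowed = true})
local tbl = json.decode(str)
-- tbl.username == "alice", tbl.allowed == true
```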
Uses the VerneMQ logging infrastructure to log the given log_string
.
You need to know about and configure a couple of Operating System and Erlang VM configs to operate VerneMQ efficiently. First, make sure you have set appropriate OS file limits according to our guide here. Second, when you run into performance problems, don't forget to check the settings in the vernemq.conf
file. (Can't open more than 10k connections? Well, is the listener configured to open more than 10k?)
This is the number one topic to look at, if you need to keep an eye on RAM usage.
Context: All network I/O in Erlang uses an internal driver. This driver will allocate and handle an internal application side buffer for every TCP connection. The default size of these buffers will determine your overall RAM use in VerneMQ. The sndbuf and recbuf of the TCP socket will not count towards VerneMQ RAM, but will be used by the Linux Kernel.
VerneMQ calculates the buffer size from the OS level TCP send and receive buffers:
val(buffer) >= max(val(sndbuf),val(recbuf))
Those values correspond to net.ipv4.tcp_wmem
and net.ipv4.tcp_rmem
in your OS's sysctl configuration. One way to minimize RAM usage is therefore to configure those settings (Debian example):
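The original examples aren't shown here; based on the values discussed below, they presumably looked roughly like this (each triple is min, default, max in bytes):

```
# Example 1: 4KB minimum, 32KB max read buffer, 65KB max write buffer
sudo sysctl -w net.ipv4.tcp_rmem="4096 16384 32768"
sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 65536"

# Example 2: capping both maxima at 32KB
sudo sysctl -w net.ipv4.tcp_rmem="4096 16384 32768"
sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 32768"
```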
This would result in a 32KB application buffer for every connection.
If your VerneMQ use case requires different TCP buffer optimisations (per group of clients, for instance), you will have to make sure that the Linux OS buffer configuration, namely net.ipv4.tcp_wmem
and net.ipv4.tcp_rmem
, allows for this kind of flexibility, permitting small and big TCP buffers at the same time.
Example 1 above would allow VerneMQ to allocate minimal TCP read and write buffers of 4KB in the Linux Kernel, a max read buffer of 32KB in the kernel, and a max write buffer of 65KB in the kernel. VerneMQ itself would set its own internal per connection buffer to 65KB in addition.
What we just described is VerneMQ automatically configuring TCP read and write buffers and internal buffers, deriving their values from the OS settings.
There are multiple additional ways to configure TCP buffers described below:
If VerneMQ finds an advanced.config
file, it will use the buffer sizes you have configured there for all its TCP listeners (and the TCP connections accepted by those listeners), except the Erlang distribution listeners within the cluster.
(You'll find an example in the section below on the advanced.config
file)
If VerneMQ finds a per protocol configuration (listener.tcp.buffer_sizes
) in the vernemq.conf
file, it will use those buffer sizes for the specific protocol (currently only MQTT or MQTTS; support for WS/WSS/HTTP/VMQ listeners is on the roadmap).
For listener.tcp.buffer_sizes
you’ll always have to state 3 values in bytes: the TCP receive buffer (recbuf), the TCP send buffer (sndbuf), and the internal application side buffer (buffer). You should set “buffer” (the 3rd value) so that val(buffer) >= max(val(sndbuf), val(recbuf)).
If VerneMQ finds per listener config values (listener.tcp.my_listener.buffer_sizes
), it will use those buffer sizes for all connections set up by that specific listener. This is the most useful approach if you want to set different buffer sizes for specific cases, like huge send buffers for listeners that accept massive consumers (consumers with high expected message throughput).
You would then configure a different listener for those massive consumers, and by that have the option to fine tune the TCP buffer sizes.
For listener.tcp.my_listener.buffer_sizes
you’ll always have to state 3 values in bytes: the TCP receive buffer (recbuf), the TCP send buffer (sndbuf), and an internal application side buffer (buffer). You should set “buffer” (the 3rd value) so that val(buffer) >= max(val(sndbuf), val(recbuf)).
This scenario would be possible with a plugin.
The advanced.config
file is a supplementary configuration file that sits in the same directory as the vernemq.conf
. You can set additional config values for any of the OTP applications that are part of a VerneMQ release. To just configure the TCP buffer size manually, you can create an advanced.config
file:
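A sketch of such a file, assuming the listener options are set via a tcp_listen_options entry of the vmq_server application (verify the exact option name against your release's documentation):

```
[{vmq_server, [
    {tcp_listen_options, [
        {sndbuf, 4096},
        {recbuf, 4096},
        {buffer, 4096}
    ]}
]}].
```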
For very advanced & custom configurations, you can add a vm.args
file to the same directory where the vernemq.conf
file is located. Its purpose is to configure parameters for the Erlang Virtual Machine. This will override any Erlang-specific parameters you might have configured via the vernemq.conf
. Normally, VerneMQ auto-generates a vm.args file for every boot in /var/lib/vernemq/generated.configs/
(Debian package example) from vernemq.conf
and other potential configuration sources.
A manually generated vm.args
is not supplementary; it is a full replacement of the auto-generated file! Keep that in mind. An easy way to go about this is to copy and extend the auto-generated file.
This is how a vm.args
file might look:
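An illustrative, shortened example with common Erlang VM flags; your auto-generated file will contain node-specific values, so copy from that rather than from this sketch:

```
## Name of the VerneMQ node and distribution cookie
-name VerneMQ@127.0.0.1
-setcookie vmq

## Erlang VM tuning (example values)
+P 512000
+Q 512000
+A 64
+K true
+W w
-env ERL_MAX_ETS_TABLES 256000
```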
Using TLS will of course increase the CPU load during connection setup. Latencies in message delivery will be increased, and your overall message throughput per second will be lower.
TLS will require considerably more RAM. Instead of 2 Erlang processes per connection, TLS will use 3. You'll have a session process, a queue process, and a TLS handler process that can encapsulate quite a big state (> 30KB).
Erlang/OTP uses its own TLS implementation, only using OpenSSL for crypto, but not connection handling. For situations with high connection setup rate or overall high connection churn rate, the Erlang TLS implementation might be too slow. On the other hand, Erlang TLS gives you great concurrency & fault isolation for long-lived connections.
Some Erlang deployments terminate SSL/TLS with an external component or with a load balancer component. Do some testing & try to find out what works best for you.
The Erlang TLS implementation is rather picky about certificate chains & formats. Don't give up if you encounter errors at first. On Linux, you can quickly find out more with the openssl s_client
command.
A guide that shows how to change the open file limits
VerneMQ can consume a large number of open file handles when thousands of clients are connected as every connection requires at least one file handle.
Most operating systems can change the open-files limit using the ulimit -n
command. Example:
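For example, to raise the limit for the current shell session to 65536:

```
ulimit -n 65536
```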
However, this only changes the limit for the current shell session. Changing the limit on a system-wide, permanent basis varies more between systems.
On most Linux distributions, the total limit for open files is controlled by sysctl
.
As seen above, it is generally set high enough for VerneMQ. If you have other things running on the system, you might want to consult the sysctl manpage for how to change that setting. However, what most needs to be changed is the per-user open files limit. This requires editing /etc/security/limits.conf
, for which you'll need superuser access. If you installed VerneMQ from a binary package, add lines for the vernemq
user like so, substituting your desired hard and soft limits:
On Ubuntu, if you’re always relying on the init scripts to start VerneMQ, you can create the file /etc/default/vernemq and specify a manual limit like so:
This file is automatically sourced from the init script, and the VerneMQ process started by it will properly inherit this setting. As init scripts are always run as the root user, there’s no need to specifically set limits in /etc/security/limits.conf
if you’re solely relying on init scripts.
On CentOS/RedHat systems, make sure to set a proper limit for the user you’re usually logging in with to do any kind of work on the machine, including managing VerneMQ. On CentOS, sudo
properly inherits the values from the executing user.
Systemd allows you to set the open file limit. The LimitNOFILE parameter defines the maximum number of file descriptors that a service or system unit can open. In the past, "infinity" was often chosen, which actually means an OS/systemd-dependent maximum number. However, in recent versions of systemd, as shipped with RHEL 9, CentOS Stream 9, and others, the default value is around a billion, significantly higher than necessary and than the defaults used in older distributions. It is advisable to set a reasonable value for LimitNOFILE based on the specific use case. Please consult https://access.redhat.com/solutions/1479623 for more information (RHEL 9).
It can be helpful to enable PAM user limits so that non-root users, such as the vernemq
user, may specify a higher value for maximum open files. For example, follow these steps to enable PAM user limits and set the soft and hard values for all users of the system to allow for up to 65536 open files.
Edit /etc/pam.d/common-session
and append the following line:
If /etc/pam.d/common-session-noninteractive
exists, append the same line as above.
Save and close the file.
Edit /etc/security/limits.conf
and append the following lines to the file:
Save and close the file.
(optional) If you will be accessing the VerneMQ nodes via secure shell (ssh), you should also edit /etc/ssh/sshd_config
and uncomment the following line:
and set its value to yes
as shown here:
Restart the machine so that the limits take effect and verify
that the new limits are set with the following command:
Edit /etc/security/limits.conf
and append the following lines to
the file:
Save and close the file.
Restart the machine so that the limits take effect and verify that the new limits are set with the following command:
In the above examples, the open files limit is raised for all users of the system. If you prefer, the limit can be specified for the vernemq
user only by substituting the two asterisks (*) in the examples with vernemq
.
In Solaris 8, there is a default limit of 1024 file descriptors per process. In Solaris 9, the default limit was raised to 65536. To increase the per-process limit on Solaris, add the following line to /etc/system
:
Reference:
To check the current limits on your Mac OS X system, run:
The last two columns are the soft and hard limits, respectively.
To adjust the maximum open file limits in OS X 10.7 (Lion) or newer, edit /etc/launchd.conf
and increase the limits for both values as appropriate.
For example, to set the soft limit to 16384 files, and the hard limit to 32768 files, perform the following steps:
Verify current limits:
The response output should look something like this:
Edit (or create) /etc/launchd.conf
and increase the limits. Add lines that look like the following (using values appropriate to your environment):
Save the file, and restart the system for the new limits to take effect. After restarting, verify the new limits with the launchctl limit command:
The response output should look something like this:
Attributions
This work, "Open File Limits", is a derivative of Open File Limits by Riak, used under Creative Commons Attribution 3.0 Unported License. "Open File Limits" is licensed under Creative Commons Attribution 3.0 Unported License by Erlio GmbH.
This guide describes how to deploy a VerneMQ cluster on Kubernetes
Kubernetes (K8s) is possibly the most mature technology for deploying Docker containers at scale. While running a single Docker container is supposed to be easy, running a Kubernetes cluster definitely isn't. That's why we recommend working with a certified Kubernetes partner such as Amazon AWS EKS, Google Cloud GKE, Microsoft Azure AKS, or DigitalOcean.
If your applications already live in Docker containers and are deployed on Kubernetes it can be beneficial to also run VerneMQ on Kubernetes. This guide covers how to successfully deploy a VerneMQ cluster on Kubernetes. Multiple options exist to deploy a VerneMQ cluster at this point. This guide describes how to use the official Helm chart as well as the still experimental Kubernetes Operator.
For the sake of clarity, this guide defines the following terms:
Kubernetes Node: A single virtual or physical machine in a Kubernetes cluster.
Kubernetes Cluster: A group of nodes firewalled from the internet, that are the primary compute resources managed by Kubernetes.
Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
Service: A Kubernetes Service that identifies a set of pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
VerneMQ Cluster: A group of VerneMQ containers that are connected via the Erlang Distribution as well as the VerneMQ clustering mechanism.
This guide assumes that you're familiar with Kubernetes.
Helm calls itself the package manager for Kubernetes. In Helm a package is called a chart. VerneMQ comes with such a Helm chart, simplifying the initial setup tremendously. If you haven't set up Helm yet, please navigate through their quickstart guide.
Once Helm is properly set up, just run the following command in your shell.
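The original command isn't shown here; assuming you have already added the VerneMQ chart repository (check the chart's README for the repository URL) and are using Helm 3, the install command looks like this:

```
helm install vernemq vernemq/vernemq
```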
This will deploy a single node VerneMQ cluster. Have a look at the possible configuration here.
A Kubernetes Operator is a method of packaging, deploying, and managing a Kubernetes application. The VerneMQ Operator is basically just a Pod tasked with deploying a VerneMQ cluster, given a so-called Custom Resource Definition (CRD). The VerneMQ CRD aims to make all required configuration possible through the CRD, so that no further configuration should be required. The following command installs the operator along with a two-node VerneMQ cluster into the namespace messaging
This will result in the following Pods:
And the following cluster status:
At this point you would like to further configure authentication and authorization. The following port forwards may be useful at this point.
kubectl port-forward svc/vernemq-k8s --namespace messaging 1883:1883
kubectl port-forward svc/vernemq-k8s --namespace messaging 8888:8888
In a VerneMQ cluster it doesn't matter to which node an MQTT client connects, subscribes, or publishes. A VerneMQ cluster looks like one big MQTT broker to the outside. While this is the main idea of VerneMQ, it comes with a cost, namely the data replication/synchronization overhead when 'persistent' clients hop from one pod to the other. As a consequence, we recommend choosing intelligently how to load balance your MQTT clients.
Load balancing in Kubernetes is configured via the Service object. Multiple service types exist:
The ClusterIP type is the default and only permits access from within the Kubernetes cluster. Other pods in the Kubernetes cluster can access VerneMQ via ClusterIP:Port
. The underlying balancing strategy is based on the settings of kube-proxy. Also, this type requires that TLS is terminated either in VerneMQ directly or via a different Pod, e.g. HAProxy.
The NodePort type uses ClusterIP under the hood but allocates a Port on every Kubernetes node and routes incoming traffic from NodeIP:NodePort
to the ClusterIP:Port
. As with ClusterIP, this type requires that TLS is terminated either in VerneMQ directly or via a different Pod, e.g. HAProxy.
The Loadbalancer type uses an external load balancer provided by the cloud provider. In fact this Service type only provides the glue code required to interact with the Loadbalancing services from different cloud providers. If you're running a bare-metal Kubernetes cluster you won't be able to use this Service type, unless you deploy a Kubernetes aware network loadbalancer yourself. Check out MetalLB, which provides a network loadbalancer for bare-metal Kubernetes clusters.
Every Kubernetes node runs a kube-proxy. kube-proxy maps virtual IP addresses to services and creates the required routes in the system so that pods can communicate with each other.
kube-proxy supports multiple modes of operation:
userspace since v1.0
iptables default since v1.2
ipvs stable since v1.11, only available if the Kernel of the Kubernetes node supports it.
The performance and scalability characteristics of VerneMQ depend on the proxy-mode and the related configurations. This is especially true for load-balancing specific functionality such as session affinity. E.g. only ipvs supports an efficient way to provide session affinity via the source hashing strategy.
Ingress controllers provide another way to do load balancing and TLS termination in a Kubernetes cluster. However, the officially supported ingress controllers, nginx and GCE, focus on balancing HTTP requests rather than plain TCP connections. Therefore, their support for TLS termination is also limited to HTTPS.
Multiple third-party ingress controllers exist, however most of them focus on handling HTTP requests. One of the exceptions is Voyager by AppsCode, an ingress controller based on HAProxy, which also efficiently terminates TLS.
Use an external loadbalancer provided by the cloud provider that is capable of terminating TLS and apply a load balancing strategy that provides session affinity e.g. via source hashing.
Terminate TLS outside VerneMQ.
Configure the Pod NodeAffinity correctly to ensure that only one VerneMQ pod is scheduled on any Kubernetes cluster node.
It's preferable to have a smaller number of Pods that are very powerful in terms of available CPU and RAM rather than many small ones.
This describes a quick way to create a VerneMQ cluster on developer's machines
Sometimes you want to have a quick way to test a cluster on your development machine as a VerneMQ developer.
You need to take care of a couple of things if you want to run multiple VerneMQ instances on the same machine. There is a make
option that lets you build multiple releases and, as a convenience, takes care of all the configuration.
First, build a normal release (this is just needed the first time) with:
➜ default git:(master) ✗ make rel
The following command will then prepare 3 correctly configured vernemq.conf files, with different ports for the MQTT listeners etc. It will also build 3 full VerneMQ releases.
➜ default git:(master) ✗ make dev1 dev2 dev3
Check if you have the 3 new releases in the _build
directory of your VerneMQ code repo.
You can then start the respective broker instances in 3 terminal windows, by using the respective commands and directory paths. Example:
➜ (_build/dev2/rel/vernemq/bin) ✗ vernemq console
The MQTT listeners will of course be configured differently for each node (the default 1883 port is not used, so that you can still run a default MQTT broker alongside your dev nodes). A couple of other ports are also adapted (HTTP status page, cluster communication). The MQTT ports are automatically configured in increasing steps of 50 (if in doubt, consult the respective vernemq.conf
files):
| Node | MQTT listener port |
| --- | --- |
| dev1@127.0.0.1 | 10053 |
| dev2@127.0.0.1 | 10103 |
| dev3@127.0.0.1 | 10153 |
| ... | ... |

Note that the dev nodes are not automatically clustered. You still need to cluster them manually, with commands like the following:

➜ (_build/dev2/rel/vernemq/bin) ✗ vmq-admin cluster join discovery-node=dev1@127.0.0.1

In case this wasn't clear so far: you can configure an arbitrary number of cluster nodes, from dev1 to devn.