Installing on Debian and Ubuntu

VerneMQ can be installed on Debian or Ubuntu-based systems using the binary package we provide.

Install VerneMQ

Once you have downloaded the binary package, execute the following command to install VerneMQ:

Note: Replace bionic with the appropriate OS version, such as focal/trusty/xenial.

Verify your installation

You can verify that VerneMQ is successfully installed by running:

If VerneMQ has been installed successfully, Status: install ok installed is returned.

Activate VerneMQ node

To use the provided binary packages the VerneMQ EULA must be accepted. See Accepting the VerneMQ EULA for more information.

Once you've installed VerneMQ, start it on your node:

Default Directories and Paths

The whereis vernemq command will give you a couple of directories:

Path
Description

Next Steps

Now that you've installed VerneMQ, check out How to configure VerneMQ.

Getting Started

A quick and simple guide to get started with VerneMQ

Installing VerneMQ

VerneMQ is a high-performance, distributed MQTT message broker. It scales horizontally and vertically on commodity hardware to support a high number of concurrent publishers and consumers while maintaining low latency and fault tolerance. To use it, all you need to do is install the VerneMQ package.

Choose your OS and follow the instructions:

Installing on CentOS and RHEL

VerneMQ can be installed on CentOS-based systems using the binary package we provide.

Install VerneMQ

Once you have downloaded the binary package, execute the following command to install VerneMQ:

or:

Introduction

Everything you must know to properly configure VerneMQ

Every VerneMQ node has to be configured, as the default configuration probably does not match your needs. Depending on the installation method and chosen platform the configuration file vernemq.conf resides at different locations. If VerneMQ was installed through a Linux package the default location for the configuration file is /etc/vernemq/vernemq.conf.

General Format of the vernemq.conf file

Non-standard MQTT options

Configure Non-Standard MQTT Options VerneMQ Supports.

Maximum Client Id Size

Set the maximum size for client ids; MQTT v3.1 specifies a limit of 23 characters.

This option defaults to 23.

sudo dpkg -i vernemq-<VERSION>.bionic.x86_64.deb

A single setting is handled on one line.

  • Lines are structured Key = Value

  • Any line starting with # is a comment, and will be ignored.

  • Minimal Quickstart Configuration

    You certainly want to try out VerneMQ right away. To just check the broker without configured authentication for now, you can allow anonymous access:

    • Set allow_anonymous = on

By default the vmq_acl authorization plugin is enabled and configured to allow publishing and subscribing to any topic (basically allowing everything); check the section on file-based authorization for more information.

    Setting allow_anonymous=on completely disables authentication in the broker and plugin authentication hooks are never called! Find the details on all the authentication hooks here. In a production system you should configure vmq_acl to be less permissive or configure some other plugin to handle authorization.
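As a quick smoke test (assuming the default TCP listener on 127.0.0.1:1883 and the mosquitto command line clients installed), you could subscribe and publish like this:

mosquitto_sub -h 127.0.0.1 -p 1883 -t 'test/topic' -d &
mosquitto_pub -h 127.0.0.1 -p 1883 -t 'test/topic' -m 'hello'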

/usr/sbin/vernemq: the vernemq and vmq-admin commands

    /usr/lib/vernemq: the vernemq package

    /etc/vernemq: the vernemq.conf file

    /usr/share/vernemq: the internal vernemq schema files

    /var/lib/vernemq: the vernemq data dirs for LevelDB (Metadata Store and Message Store)

    Accepting the VerneMQ EULA
    How to configure VerneMQ
    CentOS/RHEL
  • Debian/Ubuntu

  • It is also possible to run VerneMQ using our Docker image:

    • Docker

    Starting VerneMQ

    If you built VerneMQ from sources, you can add the /bin directory of your VerneMQ release to PATH. For example, if you compiled VerneMQ in the /home/vernemq directory, then add the binary directory (/home/vernemq/_build/default/rel/vernemq/bin) to your PATH, so that VerneMQ commands can be used in the same manner as with a packaged installation.

    To start a VerneMQ broker, use the vernemq start command in your Shell:

    A successful start will return no output. If there is a problem starting the broker, an error message is printed to STDERR.

    To run VerneMQ with an attached interactive Erlang console:

    A VerneMQ broker is typically started in console mode for debugging or troubleshooting purposes. Note that if you start VerneMQ in this manner, it is running as a foreground process that will exit when the console is closed.

    You can close the console by issuing this command at the Erlang prompt:

    Once your broker has started, you can initially check that it is running with the vernemq ping command:

    The command will respond with pong if the broker is running or Node <NodeName> not responding to pings in case it’s not.

    As you may have noticed, VerneMQ will warn you at startup when your system’s open files limit (ulimit -n) is too low. You’re advised to increase the OS default open files limit when running VerneMQ. Read more about why and how in the Open Files Limit documentation.

    Starting using systemd/systemctl

    If you use a systemd service file (as in the binary packages), you can start VerneMQ using the systemctl interface to systemd:

    Other systemctl commands work as well:

    Activate VerneMQ node

    To use the provided binary packages the VerneMQ EULA must be accepted. See Accepting the VerneMQ EULA for more information.

    Once you've installed VerneMQ, start it on your node:

    Verify your installation

    You can verify that VerneMQ is successfully installed by running:

If VerneMQ has been installed successfully, vernemq is returned.

    Next Steps

    Now that you've installed VerneMQ, check out How to configure VerneMQ.

    sudo yum install vernemq-<VERSION>.centos7.x86_64.rpm
    Maximum Topic Depth

    Usually, you'll configure permissions on your topic structures using ACLs. In addition to that, topic_max_depth sets a global maximum value for topic levels. This protects the broker from clients subscribing to arbitrary deep topic levels.

    The default value for topic_max_depth is 10. As an example, this value will allow topics like a/b/c/d/e/f/g/h/i/k, that is 10 levels. A client running into the topic depth limit will be disconnected and an error will be logged.

    Persistent Client Expiration

    This option allows persistent clients (those with clean_session set to false) to be removed if they do not reconnect within a certain time frame.

    This is a non-standard option. As far as the MQTT specification is concerned, persistent clients are persisted forever.

    The expiration period should be an integer followed by one of h, d, w, m, y for hour, day, week, month, and year; or never:

    This option defaults to never.

    Message Size Limit

    Limit the maximum publish payload size in bytes that VerneMQ allows. Messages that exceed this size won't be accepted.

Defaults to 0, which means that all valid messages are accepted. The MQTT specification imposes a maximum payload size of 268435455 bytes.

    dpkg -s vernemq | grep Status
    service vernemq start
    whereis vernemq
    vernemq: /usr/sbin/vernemq /usr/lib/vernemq /etc/vernemq /usr/share/vernemq
    vernemq start
    vernemq console
    q().
    vernemq ping
    $ sudo systemctl start vernemq
    $ sudo systemctl stop vernemq
    $ sudo systemctl status vernemq
    sudo rpm -Uvh vernemq-<VERSION>.centos7.x86_64.rpm
    service vernemq start
    rpm -qa | grep vernemq
    max_client_id_size = 23
    topic_max_depth = 20
    persistent_client_expiration = 1w
    max_message_size = 0

    Websockets

    Configure WebSocket Listeners for VerneMQ.

    VerneMQ supports the WebSocket protocol out of the box. To be able to open a WebSocket connection to VerneMQ, you have to configure a WebSocket listener or Secure WebSocket listener in the vernemq.conf file first:

    listener.ws.default = 127.0.0.1:9001
    
    listener.wss.wss_default = 127.0.0.1:9002
    # To use WSS, you'll have to configure additional options for your WSS listener (called `wss_default` here):
    listener.wss.wss_default.cafile = ./etc/cacerts.pem
    listener.wss.wss_default.certfile = ./etc/cert.pem
    listener.wss.wss_default.keyfile = ./etc/key.pem

Keep in mind that you'll use MQTT-over-WebSocket, so you will need a Javascript library that implements the MQTT client behaviour. We have used the Eclipse Paho client as well as MQTT.js.

You won't be able to open WebSocket connections on a base URL; always add the /mqtt path.
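For example, with the listeners configured above, a WebSocket client would connect to a URL of roughly this form (host and port depend on your listener configuration):

    ws://127.0.0.1:9001/mqtt
    wss://127.0.0.1:9002/mqtt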

    When establishing a WebSocket connection to the VerneMQ MQTT broker, the process begins with an HTTP connection that is then upgraded to WebSocket. This upgrade mechanism means the broker's ability to accept connections can be influenced by HTTP listener settings.

    In certain scenarios, such as when connecting from a frontend application, the size of HTTP request headers (including cookies) can exceed the default maximum allowed by VerneMQ. This can lead to a 'HTTP 431 Request Header Fields Too Large' error, preventing the connection from being established.

    This behavior is configurable in the vernemq.conf file to accommodate larger headers:

    Logging

    Configure VerneMQ Logging.

    Console Logging

    Where should VerneMQ emit the default console log messages (which are typically at info severity):

    log.console = off | file | console | both

VerneMQ defaults to logging the console messages to a file, which can be specified by:

    This option defaults to /var/log/vernemq/console.log for Ubuntu, Debian, RHEL and Docker installs.

The default console logging level is info; it can be changed by setting one of the following:

    Error Logging

VerneMQ logs error messages by default. One can change the default behaviour by setting:

VerneMQ defaults to logging the error messages to a file, which can be specified by:

    This option defaults to /var/log/vernemq/error.log for Ubuntu, Debian, RHEL and Docker installs.

    Crash Logging

VerneMQ logs crash messages by default. One can change the default behaviour by setting:

    VerneMQ defaults to logging the crash messages to a file, which can be specified by:

    This option defaults to /var/log/vernemq/crash.log for Ubuntu, Debian, RHEL and Docker installs.

The maximum size in bytes of individual messages in the crash log defaults to 64KB, but can be specified by:

    VerneMQ rotates crash logs. By default, the crash log file is rotated at midnight or when the size exceeds 10MB. This behaviour can be changed by setting:

    The default number of rotated log files is 5 and can be set with the option:

    SysLog

    VerneMQ supports logging to SysLog, enable it by setting:

    Logging to SysLog is disabled by default.

    HTTP Listeners

    How to setup and configure the HTTP listener.

    The VerneMQ HTTP listener is used to serve various VerneMQ subsystems such as:

    • Status page

    • Prometheus metrics

    • management API

By default, the listener runs on port 8888. To disable the HTTP listener, to use an HTTPS listener instead, or to change the port, adapt the configuration in vernemq.conf:

You can have multiple HTTP(S) listeners listening on different ports and running different modules:

    This configuration snippet defines two HTTPS listeners with different modules. One for default traffic and one for management traffic. It specifies which HTTP modules will be enabled on each listener, allowing for status, health, and metrics information to be retrieved from the default listener and providing a web-based interface for managing and monitoring VerneMQ through the management listener.

    MQTT Options

    Configure how VerneMQ handles certain aspects of MQTT

    Retry Interval

    Set the time in seconds after a QoS=1 or QoS=2 message has been sent that VerneMQ will wait before retrying when no response is received.

    retry_interval = 20

This option defaults to 20 seconds.

    Inflight Messages

    This option defines the maximum number of QoS 1 or 2 messages that can be in the process of being transmitted simultaneously.

    Defaults to 20 messages, use 0 for no limit. The inflight window serves as a protection for sessions, on the incoming side.

    Load Shedding

The max_online_messages option sets the maximum number of messages to hold in the online queue, in addition to those messages that are currently in flight. This option protects a client session from overload by dropping messages (of any QoS).

    Defaults to 1000 messages; use -1 for no limit. This parameter was named max_queued_messages in 0.10.*. Note that 0 will totally block message delivery from any queue!

The max_offline_messages option specifies the maximum number of QoS 1 and 2 messages to hold in the offline queue.

    Defaults to 1000 messages, use -1 for no limit, use 0 if no messages should be stored.

In contrast to the session based inflight window, max_online_messages and max_offline_messages serve as a protection of queues, on the outgoing side.

When an offline session transitions to online, by default VerneMQ will also adhere to the queue sizes when moving data from the offline queue to the online queue. Therefore, if max_offline_messages > max_online_messages, VerneMQ will start dropping messages. It is possible to override this behaviour (see override_max_online_messages) and allow VerneMQ to move all messages from the offline queue to the online queue. The queue will then be batched (or streamed) to the subscribers, and the messages are read from disk in batches as well. The additional memory needed thus is just the amount needed to store references to those messages and not the messages themselves.

    Welcome

    Welcome to the VerneMQ documentation! This is a reference guide for most of the available features and options of VerneMQ. The Getting Started guide might be a good entry point.

    The VerneMQ documentation is based on the VerneMQ Documentation project. Any changes on Github are automatically deployed to the VerneMQ online Documentation.

    For a more general overview on VerneMQ and MQTT, you might want to start with the introduction.

    For downloading the subscription-based binary VerneMQ packages and/or a quick description on how to compile VerneMQ from sources, see Downloads.

    How to help improve this documentation

The VerneMQ Documentation project is an open-source effort, and your contributions are very welcome and appreciated. You can contribute on all levels:

    • Language, style and typos

    • Fixing obvious documentation errors and gaps

    • Providing more details and/or examples for specific topics

    • Extending the documentation where you find this useful to do

    Note that the documentation is versioned according to the VerneMQ releases. You can click the "Edit on Github" button in the upper right corner of every page to check what branch and document you are on. You can then create a Pull Request (PR) against that branch from your fork of the VerneMQ documentation repository. (Direct edits on Github are possible for members of the documentation repository).

    Schema Files

    Schema Files in VerneMQ

    During every boot up, VerneMQ will run your vernemq.conf file against the Schema files of the VerneMQ release. This serves as a validation and as a mechanism to create the timestamped internal config files that you'll find in the generated.configs directory.

    In general, every application of the VerneMQ release has its own schema file in the priv subdirectory (the only exception is the vmq.schema file in the file directory). A my_app.schema defines all the configuration settings you can use for that application.

And that's almost the only reason to know a bit about schema files: you can browse them for possible settings if you suspect a minor setting is not yet fully documented. Most of the time you'll also find at least a short snippet documenting the setting in the schema file.

    An example from the vmq_server.schema:

    This is a relatively minor feature where you can set a default expiry for API keys. You can determine from the mapping schema that a default is not set. To set the value in the vernemq.conf file, always use the left-hand name from the mapping in the schema:

You can also see the keyword hidden in the mapping. This means that the setting will not show up automatically in the vernemq.conf file and you'll have to add it manually.

    log.console.file = /path/to/log/file
    VerneMQ Documentation project
    listener.http.default.max_request_line_length=32000
    listener.http.default.max_header_value_length=32000
    log.console.level = debug | info | warning | error
    listener.http.default = 127.0.0.1:8888
    listener.https.default = 127.0.0.1:443
listener.https.default.http_modules = vmq_status_http, vmq_health_http, vmq_metrics_http
    
    listener.https.mgmt = 127.0.0.1:444
listener.https.mgmt.http_modules = vmq_mgmt_http
    Health check
    HTTP Publish
    max_inflight_messages = 20
%% @doc specifies the max duration of an API key before it expires (default: undefined)
    {mapping, "max_apikey_expiry_days", "vmq_server.max_apikey_expiry_days",
        [{datatype, integer},
         {default, undefined},
         hidden
        ]}.
    max_apikey_expiry_days = 30
    log.error = on | off
    log.error.file = /path/to/log/file
    log.crash = on | off
    log.crash.file = /path/to/log/file
    log.crash.maximum_message_size = 64KB
    ## Acceptable values:
    ##   - a byte size with units, e.g. 10GB
    log.crash.size = 10MB
    
    ## For acceptable values see https://github.com/basho/lager/blob/master/README.md#internal-log-rotation
    log.crash.rotation = $D0
    log.crash.rotation.keep = 5
    log.syslog = on
    max_online_messages = 1000
    max_offline_messages = 1000
    override_max_online_messages = off

    Enhanced Auth Flow

    VerneMQ supports enhanced authentication flows or SASL style authentication for MQTT 5.0 sessions. The enhanced authentication mechanism can be used for initial authentication when the client connects or to re-authenticate clients at a later point.

The on_auth_m5 hook allows the plugin to implement SASL style authentication flows by either accepting, rejecting (disconnecting the client), or continuing the flow. The on_auth_m5 hook is specified in the Erlang behaviour on_auth_m5_hook in the vernemq_dev repo.

    MQTT Listeners

    VerneMQ supports multiple ways to configure one or many MQTT listeners.

Listeners specify on which IP address and port VerneMQ should accept new incoming connections. Depending on the chosen transport (TCP, SSL, WebSocket) different configuration parameters have to be provided. VerneMQ allows you to write the listener configurations in a hierarchical manner, enabling very flexible setups. VerneMQ applies reasonable defaults on the top level, which can of course be overridden if needed.

These are the only default parameters that are applied for all transports, and the only ones that are of interest for plain TCP and WebSocket listeners.

    These global defaults can be overridden for a specific transport protocol listener.tcp.CONFIG = VAL, or even for a specific listener listener.tcp.LISTENER.CONFIG = VAL. The placeholder LISTENER is freely chosen and is only used as a reference for further configuring this particular listener.

    Managing Listeners

    Managing VerneMQ tcp listeners

You can configure as many listeners as you wish in the vernemq.conf file. In addition to this, the vmq-admin listener command lets you configure, start, stop and delete listeners on the fly. Those can be MQTT, WebSocket or Cluster listeners, in the command line output they will be tagged mqtt, ws or vmq accordingly.

    To get info on a listener sub-command, invoke it with the --help option. Example: vmq-admin listener start --help

    Introduction

    On every VerneMQ node you'll find the vmq-admin command line tool in the release's bin directory (in case you use the binary VerneMQ packages, vmq-admin should already be callable in your path, without changing directories). It has different sub-commands that let you check for status, start and stop listeners, re-configure values and a couple of other administrative tasks.

    vmq-admin has different sub-commands with a lot of respective options. You can familiarize yourself by using the --help option on the different levels of vmq-admin. You might see additional sub-commands in case integrated plugins are running (vmq-admin bridge is an example).

    Introduction

    Description and Configuration of the built-in Monitoring mechanism

VerneMQ can be monitored in several ways. We implemented native support for Graphite, the MQTT $SYS tree, and Prometheus.

    The metrics are also available via the command line tool:

    Or with:

    Which will output the metrics together with a short description describing what the metric is about. An example looks like:

    Notice that the metrics:

    Are no longer used (always 0) and will be removed in the future. They were replaced with mqtt_connack_sent using the return_code label. For MQTT 5.0 the reason_code label is used instead.

The output on the command line is aggregated by default, but details for a label can be shown as well, for example all metrics with the not_authorized

    Certificate Management

    Certificate management

VerneMQ supports different Transport Layer Security (TLS) options, which allow for secure communication between MQTT clients and VerneMQ. Certificates typically have only a limited validity (for example one year), after which they have to be replaced. VerneMQ allows replacing a certificate without interrupting active connections.

    Replace a certificate

Replacing a certificate is straightforward. One just needs to replace (overwrite) the corresponding PEM files. VerneMQ will pick up the new certificates.

    For example, if you have the following configuration

the files cacerts.pem, cert.pem and key.pem can be overwritten (on the filesystem!) with new certificates. VerneMQ will pick up the new certificates after some time (by default around 2 minutes). It is possible to invalidate the cached certificates immediately by issuing the following command:

    Graphite

    Description and Configuration of the Graphite exporter

    The graphite exporter reports the broker metrics at a fixed interval (defined in milliseconds) to a graphite server. The necessary configuration is done inside the vernemq.conf.

    You can further tune the connection to the Graphite server:

The above configuration parameters can be changed at runtime using the vmq-admin script. Usage: vmq-admin set <key>=<value> ... [[--node | -n] <node> | --all]. Example: vmq-admin set graphite_interval=20000 graphite_port=2003 -n [email protected]

    Status Page

    The VerneMQ Status Page

VerneMQ comes with a built-in Status Page that is enabled by default and is available on http://localhost:8888/status, see HTTP listeners.

    The Status Page is a simple overview of the cluster and the individual nodes in the cluster as seen below. Note that while the Status Page is running on each node of the cluster, it's enough to look at one of them to get a quick status of your cluster.

    The Status Page has the following sections:

    • Issues (Warnings on netsplits, etc)

    Health Checker

    The VerneMQ health checker

A simple way to gauge the health of a VerneMQ cluster is to query the /health path on the HTTP listener.

The health check will return 200 when VerneMQ is accepting connections and is joined with the cluster (for clustered setups). 503 will be returned in case any of those two conditions are not met. In addition to the simple /health path, the following options are available as well:

    • /health/ping: Cowboy (ie. Verne) is up.

    $SYSTree

    Description and Configuration of the $SYSTree Monitoring Feature

    The systree functionality is enabled by default and reports the broker metrics at a fixed interval defined in the vernemq.conf. The metrics defined are transformed to MQTT topics e.g. mqtt_publish_received is transformed to $SYS/<nodename>/mqtt/publish/received. <nodename> is your node's name, as configured in the vernemq.conf. To find it, you can grep the file for it: grep nodename vernemq.conf

The complete list of metrics can be found here.

    This option defaults to 20000 milliseconds.

If the systree feature is not required, it can be disabled in vernemq.conf:

    /health/listeners: will fail if any of the configured listeners is down or suspended

  • /health/listeners_full_cluster: will fail if any listener is down or any of the cluster nodes is offline. (you probably don't want to use this to base automated actions on the status)

  • With the ping or listeners option, you can configure a health check for a single node, even if it is part of a cluster.

If you want to configure any automated actions based on the health check results, you need to choose an appropriate health check path. For example, you should not use the /health check (checking for full cluster consistency) to automatically restart a single node. This is of special importance for Kubernetes deployments.
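Assuming the default HTTP listener on port 8888, the health endpoints can be queried with a plain HTTP client, for example:

    curl http://localhost:8888/health
    curl http://localhost:8888/health/ping
    curl http://localhost:8888/health/listeners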

    HTTP listener

    vmq-admin works by RPC'ing into the local VerneMQ node by default. For most commands you can add a --node option and set values on other cluster nodes, even if the local VerneMQ node is down.

    To check for the global cluster state in case the local VerneMQ node is down, you'll have to go to another node though.

vmq-admin uses RPC to connect to some node. By default, it has a timeout of 60 seconds before vmq-admin terminates with an RPC timeout. Sometimes a call (for example cluster leave) might need more time. In that case, you can set a different timeout with vmq-admin -rpctimeout timeoutsecs or even -rpctimeout infinity.

    vmq-admin is a live re-configuration utility. Please note that all dynamically configured values will be reset by vernemq.conf upon broker restart. As a consequence, it's good practice to keep track of the applied changes when re-configuring a broker with vmq-admin. If needed, you can then persist changes by adding them to the vernemq.conf file.

    $ sudo vmq-admin --help       
    Usage: vmq-admin <sub-command>
    
      Administrate the cluster.
    
      Sub-commands:
        node        Manage this node
        cluster     Manage this node's cluster membership
        session     Retrieve session information
        retain      Show and filter MQTT retained messages
        plugin      Manage plugin system
        listener    Manage listener interfaces
        metrics     Retrieve System Metrics
        api-key     Manage API keys for the HTTP management interface
        trace       Trace various aspects of VerneMQ
      Use --help after a sub-command for more details.
    graphite_enabled = on
    graphite_host = carbon.hostedgraphite.com
    graphite_port = 2003
    graphite_interval = 20000
    graphite_api_key = YOUR-GRAPHITE-API-KEY
    graphite.interval = 15000
    # set the connect timeout (defaults to 5000 ms)
    graphite_connect_timeout = 10000
    
    # set a reconnect timeout (default to 15000 ms)
    graphite_reconnect_timeout = 10000
    
    # set a custom graphite prefix (defaults to '')
    graphite_prefix = vernemq

    Inter-node Communication

    Everything you must know to properly configure and deploy a VerneMQ Cluster

    VerneMQ uses the Erlang distribution mechanism for most inter-node communication. VerneMQ identifies other machines in the cluster using Erlang identifiers (e.g. [email protected]). Erlang resolves these node identifiers to a TCP port on a given machine via the Erlang Port Mapper daemon (epmd) running on each cluster node.

    By default, epmd binds to TCP port 4369 and listens on the wildcard interface. For inter-node communication, Erlang uses an unpredictable port by default; it binds to port 0, which means the first available port.

    For ease of firewall configuration, VerneMQ can be configured to instruct the Erlang interpreter to use a limited range of ports. For example, to restrict the range of ports that Erlang will use for inter-Erlang node communication to 6000-7999, add the following lines to vernemq.conf on each VerneMQ node:

    erlang.distribution.port_range.minimum = 6000
    erlang.distribution.port_range.maximum = 7999

    The settings above are only used for distributing subscription updates and maintenance messages. For distributing the 'real' MQTT messages the proper vmq listener must be configured in the vernemq.conf.

    listener.vmq.clustering = 0.0.0.0:44053

    It isn't necessary to configure the same port on every machine, as the nodes will probe each other for this information.

    Attributions:

    This section, "VerneMQ Inter-node Communication", is a derivative of Security and Firewalls by Riak, used under Creative Commons Attribution 3.0 Unported License.

    Erlang Boilerplate

We recommend using the rebar3 toolchain to generate the basic Erlang OTP application boilerplate and start from there.

    rebar3 new app name="myplugin" desc="this is my first VerneMQ plugin"
    ===> Writing myplugin/src/myplugin_app.erl
    ===> Writing myplugin/src/myplugin_sup.erl
    ===> Writing myplugin/src/myplugin.app.src
    ===> Writing myplugin/rebar.config
    ===> Writing myplugin/.gitignore
    ===> Writing myplugin/LICENSE
    ===> Writing myplugin/README.md

    Change the rebar.config file to include the vernemq_dev dependency:

    {erl_opts, [debug_info]}.
    {deps, [{vernemq_dev,
        {git, "git://github.com/vernemq/vernemq_dev.git", {branch, "master"}}}
    ]}.

Compile the application; this will automatically fetch vernemq_dev.

    rebar3 compile                             
    ===> Verifying dependencies...
    ===> Fetching vmq_commons ({git,
                                          "git://github.com/vernemq/vernemq_dev.git",
                                          {branch,"master"}})
    ===> Compiling vernemq_dev
    ===> Compiling myplugin

    Now you're ready to implement the hooks. Don't forget to add the proper vmq_plugin_hooks entries to your src/myplugin.app.src file.
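As a sketch (the exact hook names and arities depend on which hooks your plugin implements; the {Module, Function, Arity, Options} entry format is an assumption you should verify against the plugin development documentation), the env section of src/myplugin.app.src might look like:

    {application, myplugin,
     [{description, "this is my first VerneMQ plugin"},
      {vsn, "0.1.0"},
      {registered, []},
      {mod, {myplugin_app, []}},
      {applications, [kernel, stdlib]},
      {env, [{vmq_plugin_hooks,
              [%% {Module, Function, Arity, Options}
               {myplugin, auth_on_publish, 6, []},
               {myplugin, auth_on_subscribe, 3, []}]}]}
     ]}.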

    For a complete example, see the vernemq_demo_plugin.

    Mountpoints

    Normally, an MQTT broker hosts one single topic tree. This means that all topics are accessible to all publishers and subscribers (limited by the ACLs you configured, of course). Mountpoints are a way to host multiple topic trees in a single broker. They are completely separated and clients with different topic trees cannot publish messages to each other. This could be useful if you provide MQTT services to multiple separated use cases/verticals or clients, with a single broker. Note that mountpoints are configured via different listeners. As a consequence, the MQTT clients will have to connect to a specific port to connect to a specific topic space (mountpoint).

Mountpoints can be configured on the protocol level, or configured or overridden on the specific listener level.

    Allowed protocol versions

Since VerneMQ 1.5.0 it is possible to configure which MQTT protocol versions a listener will accept.

    VerneMQ supports MQTT 3.1, 3.1.1, and 5.0 (since VerneMQ 1.6.0). To allow these protocol versions, set:

    Here 3,4,5 are the protocol level versions corresponding to MQTT 3.1, 3.1.1 and 5.0 respectively. The default value is 3,4 thus allowing MQTT 3.1 and 3.1.1, while MQTT 5.0 is disabled.

    Sample Config

    Listen on TCP port 1883 and for WebSocket Connections on port 8888:

An additional listener can be added by using a different name. In the example above the name equals default and can be used for further configuring this particular listener. The following example demonstrates how an additional listener is defined as well as how the maximum number of connections can be limited for this listener:

    PROXY protocol

    VerneMQ listeners can be configured to accept connections from a proxy server that supports the PROXY protocol. This enables VerneMQ to retrieve peer information such as source IP/Port but also PROXY Version 2 protocol TLS client certificate details if the proxy was used to terminate TLS.

    To enable the PROXY protocol for tcp listeners use listener.tcp.proxy_protocol = on or for a specific listener use listener.tcp.LISTENER.proxy_protocol = on.

    If client certificates are used you can set listener.tcp.proxy_protocol_use_cn_as_username = on which will overwrite the MQTT username set by the client with the common name from the client certificate before authentication and authorization is performed.
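Putting these options together, a minimal sketch in vernemq.conf could look like this (the listener name my_proxied is just an example):

    # accept PROXY protocol connections on a dedicated TCP listener
    listener.tcp.my_proxied = 0.0.0.0:1883
    listener.tcp.my_proxied.proxy_protocol = on

    # optionally use the client certificate CN as the MQTT username
    listener.tcp.proxy_protocol_use_cn_as_username = on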

    Timeout Settings

    VerneMQ listeners timeouts can be configured to suit all connection speeds. This enables VerneMQ to adapt to constrained devices with limited computing power.

    SSL/TLS Support

    VerneMQ supports different Transport Layer Security (TLS) options, which allow for secure communication between MQTT clients and VerneMQ.

    TLS provides secure communication between devices by encrypting the data in transit, preventing unauthorized access and ensuring the integrity of the data. VerneMQ supports various TLS options, including the use of certificates, mutual authentication, Pre-Shared Keys and the ability to specify specific ciphersuites and TLS versions.

VerneMQ supports the following TLS flavours:

    • Server Side TLS

    • TLS-PSK

• Mutual TLS (mTLS)

    In server-side TLS, the client initiates a TLS handshake with the broker, and the broker responds by sending its certificate. The client verifies the certificate and generates a symmetric key, which is used to encrypt and decrypt data exchanged between the client and broker. Server-side TLS does no further authentication or authorization of the client. The broker later on authenticates and authorizes clients through MQTT.

TLS-PSK (Pre-Shared Key) secures communication between MQTT client and broker using pre-shared keys for authentication. Unlike server-side or mutual TLS, which use certificates to authenticate the server and client, TLS-PSK uses a pre-shared secret (a key) to authenticate the endpoints. Clients that support TLS-PSK can use the specified pre-shared keys to authenticate themselves to the broker, providing a lightweight alternative to certificate-based authentication. The key has to be securely stored on the MQTT device.

Mutual TLS (mTLS) provides mutual authentication and encryption of data in transit between MQTT client and Broker. Unlike server-side TLS, where only the server is authenticated to the client, mTLS requires both the client and server to authenticate each other before establishing a secure connection.

    The decision to use TLS, TLS-PSK, or mTLS depends on your specific use case and security requirements.

    Sample SSL Config

Server-Side TLS

    Accepting SSL connections on port 8883:

    TLS-PSK (Pre-Shared Keys)

The following configuration snippet enables TLS-PSK authentication on VerneMQ's SSL listener, specifies the location of the pre-shared key file, and sets the list of ciphers to be used for encryption. Clients that support TLS-PSK can use the specified pre-shared keys to authenticate themselves to the broker, providing a lightweight alternative to certificate-based authentication.

    The PSK file contains a list of matching identifiers and psk keys.

Mutual TLS (mTLS)

    If you want to use client certificates to authenticate your clients you have to set the following option:

If you use client certificates and want to use the certificate's CN value as a username you can set:

    Both options require_certificate and use_identity_as_username default to off. mTLS can work with additional MQTT-based authentication (username and password) or without. In case you want to use only mTLS-based authentication you need to enable allow_anonymous (global) or allow_anonymous_override (listener).

    WebSocket

    The same configuration options can be used for securing WebSocket connections, just use wss as the protocol identifier e.g. listener.wss.require_certificate.

    With SSL, you still need to configure authentication and authorization! That is, set allow_anonymous to off, and configure vmq_acl and vmq_passwd or your authentication plugin.

    The default listener listener.vmq.clustering is used for distributing MQTT messages among the cluster nodes.

    Listeners configured with the vmq-admin listener command will not survive a broker restart. Live changes to listeners configured in vernemq.conf are possible, but the vernemq.conf listeners will just be restarted with a broker restart.

    Status of all listeners

You can retrieve additional information by adding the --tls or --mqtt switch. See vmq-admin listener show --help for more information.

    Starting a new listener

    This will start an MQTT listener on port 1884 and IP address 192.168.1.50. If you want to start a WebSocket listener, just tell VerneMQ by adding the --websocket flag. There are more options, mainly for configuring SSL (use vmq-admin listener start --help).

    You can isolate client connections accepted by a certain listener from other clients by setting a mountpoint.

    To start an MQTT listener using defaults, just set the port and IP address as a minimum.

    Stopping a listener

    A stopped listener will not accept new connections, but continue existing sessions. You can add the -k or --kill_sessions switch to that command. This will disconnect all client connections setup by that listener. In combination with a mountpoint, this can be useful for terminating clients for a specific application, or to force re-connects to another cluster node (to prepare for a cluster leave for your node).

    Restarting a stopped listener

    Deleting a stopped listener

    label:

All available labels can be shown using vmq-admin metrics show --help.

    vmq-admin metrics show
    vmq-admin metrics show -d
    # The number of AUTH packets received.
    counter.mqtt_auth_received = 0
    
    # The number of times a MQTT queue process has been initialized from offline storage.
    counter.queue_initialized_from_storage = 0
    
    # The number of PUBLISH packets sent.
    counter.mqtt_publish_sent = 10
    
    # The number of bytes used for storing retained messages.
    gauge.retain_memory = 21184
    Graphite
    MQTT $SYS tree
    Prometheus

One can use the openssl s_client tool to verify that the new certificate has been deployed:

    Running sessions and certificate validity

Unless the client is implemented otherwise, all active connections will remain active. Please note that certificate validity is checked during the TLS/SSL handshake, which happens once at the beginning of the session. Running sessions are not affected by an expired certificate.

    In case you want to invalidate all existing connections it is recommended to stop/start the listener.

    If you generally want to force your clients to reconnect after a specified period of time you can configure a maximum connection lifetime, after which a client is disconnected by the broker.

    listener.ssl.cafile = /etc/ssl/cacerts.pem
    listener.ssl.certfile = /etc/ssl/cert.pem
    listener.ssl.keyfile = /etc/ssl/key.pem
    
    listener.ssl.default = 127.0.0.1:8883
    vmq-admin tls clear-pem-cache
    openssl s_client -host 127.0.0.1 -port 8883

The feature and the interval can be changed at runtime using the vmq-admin script. Usage: vmq-admin set <key>=<value> ... [[--node | -n] <node> | --all]. Example: vmq-admin set systree_interval=60000 -n [email protected]

    Examples:

    systree_interval = 20000
    here
    here.
    systree_enabled = off
    mosquitto_sub -t '$SYS/<node-name>/#' -u <username> -P <password> -d
    # defines the default nr of allowed concurrent 
    # connections per listener
    listener.max_connections = 10000
    
    # defines the nr. of acceptor processes waiting
    # to concurrently accept new connections
    listener.nr_of_acceptors = 10
    
    # used when clients of a particular listener should
    # be isolated from clients connected to another 
    # listener.
    listener.mountpoint = off
    listener.ssl.mountpoint = ssl-mountpoint
    
    listener.tcp.listener1.mountpoint = tcp-listener1
    listener.tcp.listener2.mountpoint = tcp-listener2
    listener.tcp.allowed_protocol_versions = 3,4,5
    listener.tcp.default = 127.0.0.1:1883
    listener.ws.default = 127.0.0.1:8888
    listener.tcp.my_other = 127.0.0.1:18884
    listener.tcp.my_other.max_connections = 100
    listener.ssl.my_listener.tls_handshake_timeout = 8000
    mqtt.connect.timeout = 30000
    listener.ssl.cafile = /etc/ssl/cacerts.pem
    listener.ssl.certfile = /etc/ssl/cert.pem
    listener.ssl.keyfile = /etc/ssl/key.pem
    
    listener.ssl.default = 127.0.0.1:8883
    listener.ssl.psk_support = on
    listener.ssl.pskfile = /srv/vernemq/etc/vernemq.psk
    listener.ssl.ciphers  = PSK-AES256-GCM-SHA384:PSK-AES256-CBC-SHA:PSK-AES128-GCM-SHA256
    listener.ssl.my_listener.require_certificate = on
    listener.ssl.my_listener.use_identity_as_username = on
    listener.ssl.my_listener.allow_anonymous_override = on
    vmq-admin listener show
        +----+-------+------------+-----+----------+---------+
        |type|status |     ip     |port |mountpoint|max_conns|
        +----+-------+------------+-----+----------+---------+
        |vmq |running|192.168.1.50|44053|          |  30000  |
        |mqtt|running|192.168.1.50|1883 |          |  30000  |
        +----+-------+------------+-----+----------+---------+
    vmq-admin listener show --help 
    vmq-admin listener start address=192.168.1.50 port=1884 --mountpoint /test --nr_of_acceptors=10 --max_connections=1000
    vmq-admin listener stop address=192.168.1.50 port=1884
    vmq-admin listener restart address=192.168.1.50 port=1884
    vmq-admin listener delete address=192.168.1.50 port=1884
    mqtt_connack_not_authorized_sent
    mqtt_connack_bad_credentials_sent
    mqtt_connack_server_unavailable_sent
    mqtt_connack_identifier_rejected_sent
    mqtt_connack_unacceptable_protocol_sent
    mqtt_connack_accepted_sent
    vmq-admin metrics show --return_code=not_authorized
    counter.mqtt_connack_sent = 0
    vmq-admin listener stop <your listener>
    vmq-admin listener start <your listener>
    listener.ssl.default.max_connection_lifetime = 25000
    Cluster Overview
  • Node Status

  • The Status Page will automatically refresh itself every 10 seconds, and try to calculate rates in Javascript, based on that reload window. Therefore, the displayed rates might be slightly inaccurate. The Status Page should not be considered a replacement for a metrics system. Running in production, you certainly want to hook up VerneMQ to a metrics system like Prometheus.

    HTTP listeners

    Consumer session balancing

    MQTT consumers can share and loadbalance a topic subscription.

    Consumer session balancing has been deprecated and will be removed in VerneMQ 2.0. Use Shared Subscriptions instead.

    Sometimes consumers get overwhelmed by the number of messages they receive. VerneMQ can load balance between multiple consumer instances subscribed to the same topic with the same ClientId.

    Enabling Session Balancing

    To enable session balancing, activate the following two settings in vernemq.conf
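A sketch of what this typically looks like, assuming the historical option names allow_multiple_sessions and queue_deliver_mode (check your release's vmq_server.schema if in doubt):

    allow_multiple_sessions = on
    queue_deliver_mode = balance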

Currently those settings will activate consumer session balancing globally on the respective node. Restricting balancing to specific consumers only will require a plugin. Note that you cannot balance consumers spread over different cluster nodes.

    Dealing with Netsplits

How VerneMQ deals with network partitions, aka netsplits.

This section elaborates on how a VerneMQ cluster deals with network partitions (aka netsplit or split-brain situations). A netsplit is mostly the result of a failure of one or more network devices resulting in a cluster where nodes can no longer reach each other.

VerneMQ is able to detect a network partition, and by default it will stop serving CONNECT, PUBLISH, SUBSCRIBE, and UNSUBSCRIBE requests. A properly implemented client will always resend unacked commands, and messages are therefore not lost (QoS 0 publishes will be lost). However, in the time window between the network partition occurring and VerneMQ detecting it, much can happen. Moreover, this time frame will be different on every participating cluster node. In this guide we're referring to this time frame as the Window of Uncertainty.

The behaviour during a netsplit is completely configurable via allow_register_during_netsplit, allow_publish_during_netsplit, allow_subscribe_during_netsplit, and allow_unsubscribe_during_netsplit. These options supersede the trade_consistency option. In order to reach the same behaviour as trade_consistency = on, all the mentioned netsplit options have to be set to on.
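For example, to spell out the default consistent behaviour in vernemq.conf (switch an option to on to favour availability for that operation during a partition):

    allow_register_during_netsplit = off
    allow_publish_during_netsplit = off
    allow_subscribe_during_netsplit = off
    allow_unsubscribe_during_netsplit = off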

    Possible Scenario for Message Loss:

    VerneMQ follows an eventually consistent model for storing and replicating the subscription data. This also includes retained messages.

    Due to the eventually consistent data model it is possible that during the Window of Uncertainty a publish won't take into account a subscription made on a remote node (in another partition). Obviously, VerneMQ can't deliver the message in this case. The same holds for delivering retained messages to remote subscribers.

Last will messages that are triggered during the Window of Uncertainty will be delivered to the reachable subscribers. Currently, last will messages triggered during a netsplit but after the Window of Uncertainty will be lost.

    Possible Scenario for Duplicate Clients:

Normally, client registration is synchronized using an elected leader node for the given client id. Such a synchronization removes the race condition between multiple clients trying to connect with the same client id on different nodes. However, during the Window of Uncertainty it is currently possible that VerneMQ fails to disconnect a client connected to a different node. Although this scenario sounds artificially crafted, it is possible to end up with duplicate clients connected to the cluster.

    Recovering from a Netsplit

As soon as the partition is healed, and connectivity reestablished, the VerneMQ nodes replicate the latest changes made to the subscription data. This includes all the changes 'accidentally' made during the Window of Uncertainty. VerneMQ ensures that convergence regarding subscription data and retained messages is eventually reached.

    Output Format

    Changing the output format of CLI commands

    Default Output Format

    The default output format is called human-readable. It will print tables or text answers in response to your CLI commands.

    JSON Output Format

    The only alternative format is JSON. You can request it by adding the --format=json key to a command.

    To pretty-print your JSON or extract the table object, use the jq command. Currently, not all responses give you a nice table and attributes format. Namely, vmq-admin metrics show will only give the metrics as text.
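For example, to get the JSON output of a command and pretty-print it with jq:

    vmq-admin cluster show --format=json | jq .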

    Shared subscriptions

    Working with shared subscriptions

A shared subscription is a mechanism for distributing messages to a set of subscribers of a shared subscription topic, such that each message is received by only one subscriber. This contrasts with normal subscriptions where each subscriber will receive a copy of the published message.

A shared subscription is of the form $share/sharename/topic and subscribers to this topic will receive messages published to the topic topic. The messages will be distributed according to the defined distribution policy.

    The MQTT spec only defines shared subscriptions for protocol version 5. VerneMQ supports shared subscription for v5 (as per the specification) and for v3.1.1 (backported feature).

    When subscribing to a shared subscription using command line tools remember to quote the topic as some command line shells, like bash, will otherwise expand the $share part of the topic as an environment variable.

    Configuration

Currently four message distribution policies for shared subscriptions are supported: prefer_local, random, local_only and prefer_online_before_local. Under the random policy messages will be published to a random member of the shared subscription, if any exist. Under the prefer_local policy messages will be delivered to a random node-local member of the shared subscription; if none exist, the message will be delivered to a random member of the shared subscription on a remote cluster node. The prefer_online_before_local policy works similarly to prefer_local, but will look for an online subscriber on a non-local node if there are only offline subscribers on the local one. Under the local_only policy messages will be delivered to a random node-local member of the shared subscription.
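The policy is selected in vernemq.conf; assuming the setting is named shared_subscription_policy (verify against the vmq_server.schema of your release), a configuration would look like:

    shared_subscription_policy = prefer_local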

When a message is being delivered to subscribers of a shared subscription, the message will be delivered to an online subscriber if possible, otherwise the message will be delivered to an offline subscriber.

    Note that Shared Subscriptions still fully operate under the MQTT specification (be it MQTT 5.0 or backported to older protocol versions). Be aware of this, especially regarding QoS and clean_session configurations. This also means that there is no shared offline message queue for all clients, but each client has its own offline message queue. MQTT v5 shared subscriptions thus have a different behaviour than e.g. Kafka where consumers read from a single shared message queue.

    Examples

    Subscriptions Note: When subscribing to a shared topic, make sure to escape the $

    So, for dash or bash shells
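A sketch using mosquitto_sub (group and topic names are illustrative); the single quotes prevent the shell from expanding $share:

    mosquitto_sub -h localhost -p 1883 -t '$share/group/mytopic' -d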

    Publishing Note: When publishing to a shared topic, do not include the prefix $share/group/ as part of the publish topic name
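For example (again with the mosquitto tools, names illustrative), publish to the plain topic:

    mosquitto_pub -h localhost -p 1883 -t 'mytopic' -m 'hello'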

    Storage

    VerneMQ uses Google's LevelDB as a fast storage backend for messages and subscriber information. Each VerneMQ node runs its own embedded LevelDB store.

    Configuration of LevelDB memory

    There's not much you need to know about LevelDB and VerneMQ. One really important thing to note is that LevelDB manages its own memory. This means that VerneMQ will not allocate and free memory for LevelDB. Instead, you'll have to tell LevelDB how much memory it can use up by setting leveldb.maximum_memory.percent.

    Configuring LevelDB memory:

    leveldb.maximum_memory.percent = 20

    LevelDB means business with its allocated memory. It will eventually end up with the configured max, making it look like there's a memory leak, or even triggering OOM kills. Keep that in mind when configuring the percentage of RAM you give to LevelDB. Historically, the configured default was at 70% percent of RAM, which is too high for a lot of use cases and can be safely lowered.

    Advanced options

(e)LevelDB exposes a couple of additional configuration values that we link here for the sake of completeness. You can change all the values mentioned in the corresponding schema file. VerneMQ mostly uses the configured defaults, and for most use cases it should not be necessary to change those.

    Retained messages

    Inspecting the retained message store

    To list the retained messages simply invoke vmq-admin retain show:

    $ vmq-admin retain show
    +------------------+----------------+
    |     payload      |     topic      |
    +------------------+----------------+
    | a-third-message  | a/third/topic  |
    |some-other-message|some/other/topic|
    |    a-message     |   some/topic   |
    |    a-message     | another/topic  |
    +------------------+----------------+

Note, by default a maximum of 100 results are returned. This is a mechanism to protect the broker from overload, as there can be millions of retained messages. Use --limit=<RowLimit> to override the default value.

    Besides listing the retained messages it is also possible to filter them:

    $ vmq-admin retain show --payload --topic=some/topic
    +---------+
    | payload |
    +---------+
    |a-message|
    +---------+

    In the above example we list only the payload for the topic some/topic.

Another example, where all topics with retained messages matching a specific payload are listed:
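A sketch of such a query, assuming --payload accepts a filter value just like --topic does (check vmq-admin retain show --help):

    vmq-admin retain show --topic --payload=a-message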

    See the full set of options and documentation by invoking vmq-admin retain show --help:

    Clustering during development

    This describes a quick way to create a VerneMQ cluster on developer's machines

    Sometimes you want to have a quick way to test a cluster on your development machine as a VerneMQ developer.

You need to take care of a couple of things if you want to run multiple VerneMQ instances on the same machine. There is a make option that lets you build multiple releases, as a convenience, taking care of all the configuration.

    First, build a normal release (this is just needed the first time) with:

    ➜ default git:(master) ✗ make rel

    The following command will then prepare 3 correctly configured vernemq.conf files, with different ports for the MQTT listeners etc. It will also build 3 full VerneMQ releases.

    ➜ default git:(master) ✗ make dev1 dev2 dev3

    Check if you have the 3 new releases in the _build directory of your VerneMQ code repo.

    You can then start the respective broker instances in 3 terminal windows, by using the respective commands and directory paths. Example:

    ➜ (_build/dev2/rel/vernemq/bin) ✗ vernemq console

The MQTT listeners will of course be configured differently for each node (the default 1883 port is not used, so that you can still run a default MQTT broker besides your dev nodes). A couple of other ports are also adapted (HTTP status page, cluster communication). The MQTT ports are automatically configured in increasing steps of 50 (if in doubt, consult the respective vernemq.conf files).

    Node
    MQTT listener port

    Note that the dev nodes are not automatically clustered. You still need to manually cluster them with commands like the following:

    ➜ (_build/dev2/rel/vernemq/bin) ✗ vmq-admin cluster join [email protected]

    In case this wasn't clear so far: You can configure an arbitrary number of cluster nodes, from dev1 to devn.

    Prometheus

    Description and Configuration of the Prometheus exporter

    The Prometheus exporter is enabled by default and installs an HTTP handler on http://localhost:8888/metrics. To read more about configuring the HTTP listener, see HTTP Listener Configuration.

    Example Scrape Config

    Add the following configuration to the scrape_configs section inside prometheus.yml of your Prometheus server.

    # A scrape configuration containing exactly one endpoint to scrape: 
    # Here it's Prometheus itself.
    scrape_configs:
      - job_name: 'vernemq'
        scrape_interval: 5s
        scrape_timeout: 5s
        static_configs:
          - targets: ['localhost:8888']

    This tells Prometheus to scrape the VerneMQ metrics endpoint every 5 seconds.

Please follow the documentation on the Prometheus website to properly configure the metrics scraping as well as how to access those metrics and configure alarms and graphs.

    Publish Flow

In this section the publish flow is described. VerneMQ provides multiple hooks throughout the flow of a message. The most important ones are the auth_on_publish and auth_on_publish_m5 hooks which act as an application level firewall granting or rejecting a publish message.

    auth_on_publish and auth_on_publish_m5

The auth_on_publish and auth_on_publish_m5 hooks allow your plugin to grant or reject publish requests sent by a client. They also enable you to rewrite the publish topic, payload, QoS, or retain flag and, in the case of auth_on_publish_m5, the properties. The auth_on_publish hook is specified in the Erlang behaviour auth_on_publish_hook and the auth_on_publish_m5 hook in the auth_on_publish_m5_hook behaviour, both available in the vernemq_dev repo.

Every plugin that implements the auth_on_publish or auth_on_publish_m5 hooks is part of a conditional plugin chain. For this reason we allow the hook to return different values. In case the plugin can't validate the publish message it is best to return next as this would allow subsequent plugins in the chain to validate the request. If no plugin is able to validate the request it gets automatically rejected.
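As an illustration only (module name and topic rules are made up), a minimal auth_on_publish implementation following the vernemq_dev auth_on_publish_hook behaviour could look like the sketch below. Note that topics are passed to the hook as lists of binary topic levels:

-module(my_auth_plugin).
-behaviour(auth_on_publish_hook).
-export([auth_on_publish/6]).

%% Reject publishes below "forbidden/", accept publishes below "public/",
%% and hand everything else over to the next plugin in the chain.
auth_on_publish(_UserName, _SubscriberId, _QoS, [<<"forbidden">>|_], _Payload, _IsRetain) ->
    {error, not_authorized};
auth_on_publish(_UserName, _SubscriberId, _QoS, [<<"public">>|_], _Payload, _IsRetain) ->
    ok;
auth_on_publish(_UserName, _SubscriberId, _QoS, _Topic, _Payload, _IsRetain) ->
    next.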

    on_publish and on_publish_m5

The on_publish and on_publish_m5 hooks allow your plugin to get informed about an authorized publish message. The on_publish hook is specified in the Erlang behaviour on_publish_hook and the on_publish_m5 hook in the on_publish_m5_hook behaviour, both available in the vernemq_dev repo.

    on_offline_message

The on_offline_message hook allows your plugin to get notified about a queued message for a client that is currently offline. The hook is specified in the Erlang behaviour on_offline_message_hook available in the vernemq_dev repo.

    on_deliver and on_deliver_m5

The on_deliver and on_deliver_m5 hooks allow your plugin to get informed about outgoing publish messages, but also allow you to rewrite the topic and payload of the outgoing message. The on_deliver hook is specified in the Erlang behaviour on_deliver_hook and the on_deliver_m5 hook in the on_deliver_m5_hook behaviour, both available in the vernemq_dev repo.

Every plugin that implements the on_deliver or on_deliver_m5 hooks is part of a conditional plugin chain, although NO verdict is required in this case. The message gets delivered in any case. If your plugin uses this hook to rewrite the message the plugin system stops evaluating subsequent plugins in the chain.

    Subscribe Flow

    In this section the subscription flow is described. VerneMQ provides several hooks to intercept the subscription flow. The most important ones are the auth_on_subscribe and auth_on_subscribe_m5 hooks which act as an application level firewall granting or rejecting subscribe requests.

    auth_on_subscribe and auth_on_subscribe_m5

The auth_on_subscribe and auth_on_subscribe_m5 hooks allow your plugin to grant or reject subscribe requests sent by a client. They also make it possible to rewrite the subscribe topic and QoS. The auth_on_subscribe hook is specified in the Erlang behaviour auth_on_subscribe_hook and the auth_on_subscribe_m5 hook in the auth_on_subscribe_m5_hook behaviour, both available in the vernemq_dev repo.

    on_subscribe and on_subscribe_m5

The on_subscribe and on_subscribe_m5 hooks allow your plugin to get informed about an authorized subscribe request. The on_subscribe hook is specified in the Erlang behaviour on_subscribe_hook and the on_subscribe_m5 hook in the on_subscribe_m5_hook behaviour, both available in the vernemq_dev repo.

    on_unsubscribe and on_unsubscribe_m5

The on_unsubscribe and on_unsubscribe_m5 hooks allow your plugin to get informed about an unsubscribe request. They also allow you to rewrite the unsubscribe topic if required. The on_unsubscribe hook is specified in the Erlang behaviour on_unsubscribe_hook and the on_unsubscribe_m5 hook in the on_unsubscribe_m5_hook behaviour, both available in the vernemq_dev repo.

    Auth using files

    Authentication

    VerneMQ comes with a simple file-based password authentication mechanism which is enabled by default. If you don't need this it can be disabled by setting:

By default VerneMQ doesn't accept any client that hasn't been configured using vmq-passwd. If you want to change this and accept any client connection you can set:

    Advanced Options

    Configure a couple of hidden options for VerneMQ

    There are a couple of hidden options you can set in the vernemq.conf file. Hidden means that you have to add and set the value explicitly. Hidden options still have default values. Changing them should be considered advanced, possibly with the exception of setting a max_message_rate.

    Queue Deliver mode

Specify how the queue should deliver messages when multiple sessions are allowed. In the case of fanout all the attached sessions will receive the message; in the case of balance an attached session is chosen randomly. Note that the feature to enable multiple sessions will be deprecated in VerneMQ 2.0.

    Inspecting and managing sessions

    Inspecting and managing MQTT sessions

    Inspecting sessions

    VerneMQ comes with powerful tools for inspecting the state of MQTT sessions. To list current MQTT sessions simply invoke vmq-admin session show:

    To see detailed information about the command see vmq-admin session show --help.

The command can show a lot of different information about a client, for example the client id, the peer host and port, whether the client is online or offline, and much more; see vmq-admin session show --help for details. This information can also be used to filter the output, which is very helpful when you want to narrow it down to a single client.

    Session lifecycle

    VerneMQ provides multiple hooks throughout the lifetime of a session. The most important ones are the auth_on_register and auth_on_register_m5 hooks which act as an application level firewall granting or rejecting new clients.

    auth_on_register and auth_on_register_m5

The auth_on_register and auth_on_register_m5 hooks allow your plugin to grant or reject new client connections. Moreover they let you exert fine-grained control over the configuration of the client session. The auth_on_register hook is specified in the Erlang behaviour auth_on_register_hook and the auth_on_register_m5 hook in the auth_on_register_m5_hook behaviour, both available in the vernemq_dev repo.

Every plugin that implements the auth_on_register or auth_on_register_m5 hooks is part of a conditional plugin chain. For this reason we allow the hook to return different values depending on how the plugin grants or rejects this client. In case the plugin doesn't know the client it is best to return next as this would allow subsequent plugins in the chain to validate this client. If no plugin is able to validate the client it gets automatically rejected.
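For illustration (module name and credentials are made up), a sketch of an auth_on_register implementation based on the vernemq_dev auth_on_register_hook behaviour that accepts one known user, rejects wrong credentials for that user, and defers every other client to the next plugin in the chain:

-module(my_register_plugin).
-behaviour(auth_on_register_hook).
-export([auth_on_register/5]).

%% Peer is {IpAddr, Port}, SubscriberId is {MountPoint, ClientId}.
auth_on_register(_Peer, _SubscriberId, <<"known_user">>, <<"correct_password">>, _CleanSession) ->
    ok;
auth_on_register(_Peer, _SubscriberId, <<"known_user">>, _WrongPassword, _CleanSession) ->
    {error, invalid_credentials};
auth_on_register(_Peer, _SubscriberId, _UserName, _Password, _CleanSession) ->
    next.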

    on_auth_m5

    The on_auth_m5 hook allows your plugin to implement MQTT enhanced authentication, see Enhanced Authentication Flow.

    on_register and on_register_m5

The on_register and on_register_m5 hooks allow your plugin to get informed about a newly authenticated client. The on_register hook is specified in the Erlang behaviour on_register_hook and the on_register_m5 hook in the on_register_m5_hook behaviour, both available in the vernemq_dev repo.

    on_client_wakeup

    Once a new client was successfully authenticated and the above described hooks have been called, the client attaches to its queue. If it is a returning client using clean_session=false or if the client had previous sessions in the cluster, this process could take a while. (As offline messages are migrated to a new node, existing sessions are disconnected). The on_client_wakeup hook is called at the point where a queue has been successfully instantiated, possible offline messages migrated, and potential duplicate sessions have been disconnected. In other words: when the client has reached a completely initialized, normal state for accepting messages. The hook is specified in the Erlang behaviour on_client_wakeup_hook available in the vernemq_dev repo.

    on_client_offline

    This hook is called if an MQTT 3.1/3.1.1 client using clean_session=false or an MQTT 5.0 client with a non-zero session_expiry_interval closes the connection or gets disconnected by a duplicate client. The hook is specified in the Erlang behaviour on_client_offline_hook available in the vernemq_dev repo.

    on_client_gone

    This hook is called if an MQTT 3.1/3.1.1 client using clean_session=true or an MQTT 5.0 client with the session_expiry_interval set to zero closes the connection or gets disconnected by a duplicate client. The hook is specified in the Erlang behaviour on_client_gone_hook available in the vernemq_dev repo.

     allow_multiple_sessions = on
     queue_deliver_mode = balance
    vmq-admin listener show --format=json
    {"table":[{"type":"vmq","status":"running","address":"0.0.0.0","port":"44053","mountpoint":"","max_conns":10000,"active_conns":0,"all_conns":0},{"type":"mqtt","status":"running","address":"127.0.0.1","port":"1883","mountpoint":"","max_conns":10000,"active_conns":0,"all_conns":0},{"type":"mqttws","status":"running","address":"127.0.0.1","port":"1887","mountpoint":"","max_conns":10000,"active_conns":0,"all_conns":0},{"type":"http","status":"running","address":"127.0.0.1","port":"8888","mountpoint":"","max_conns":10000,"active_conns":0,"all_conns":0}],"type":"table"}%
    vmq-admin listener show --format=json | jq '.table'
    Random Message Routing for Shared Subscribers
    Local Only Message Routing for Shared Subscribers
    Prefer_local Message Routing for Shared Subscribers
    $ vmq-admin retain show --payload a-message --topic
    +-------------+
    |    topic    |
    +-------------+
    | some/topic  |
    |another/topic|
    +-------------+
    $ sudo vmq-admin retain --help
    Usage: vmq-admin retain show
    
      Show and filter MQTT retained messages.
    
    Default options:
      --payload --topic
    
    Options
    
      --limit=<NumberOfResults>
          Limit the number of results returned. Defaults is 100.
      --payload
      --topic
      --mountpoint

    Warning: Setting allow_anonymous=on completely disables authentication in the broker and plugin authentication hooks are never called! Find more information on the authentication hooks here.

    In a production setup you can use the provided password based authentication mechanism, one of the provided authentication Database plugins, or implement your own authentication plugins.

    VerneMQ periodically checks the specified password file.

    The check interval defaults to 10 seconds and can also be defined in the vernemq.conf.

    Setting the password_reload_interval = 0 disables automatic reloading.

    Both configuration parameters can also be changed at runtime using the vmq-admin script.

    Example: to dynamically set the reload interval to 60 seconds on all your cluster nodes, issue the following command on one of the nodes:

    sudo vmq-admin set vmq_passwd.password_reload_interval=60 --all

    Manage Password Files for VerneMQ

vmq-passwd is a tool for managing password files for the VerneMQ broker. Usernames must not contain ":"; passwords are stored in a format similar to crypt(3).

    How to use vmq-passwd

    Options

    -c

    Creates a new password file. Does not overwrite existing file.

    -cf

    Creates a new password file. If the file already exists, it will be overwritten.

    <no option>

When run with no option, it will create a new user and password entry and append it to the password file if the file exists. It does not overwrite the existing file.

    -D

    Deletes the specified user from the password file.

    -U

This option can be used to upgrade/convert a password file with plain text passwords into one using hashed passwords. It will modify the specified file. It does not detect whether passwords are already hashed, so using it on a password file that already contains hashed passwords will generate new hashes based on the old hashes and render the password file unusable. Note, with this option neither usernames nor passwords may contain ":".

    passwordfile

    The password file to modify.

    username

    The username to add/update/delete.

    Examples

    Add a user to a new password file: (you can choose an arbitrary name for the password file, it only has to match the configuration in the VerneMQ configuration file).

    Delete a user from a password file

Add multiple users to an existing password file:

    Acknowledgements

The original version of vmq-passwd was developed by Roger Light (roger@atchoo.org).

vmq-passwd includes:

• software developed by the OpenSSL Project (http://www.openssl.org/) for use in the OpenSSL Toolkit.

• cryptographic software written by Eric Young (eay@cryptsoft.com)

• software written by Tim Hudson (tjh@cryptsoft.com)

    Authorization

    VerneMQ comes with a simple ACL based authorization mechanism which is enabled by default. If you don't need this it can be disabled by setting:

    VerneMQ periodically checks the specified ACL file.

    The check interval defaults to 10 seconds and can also be defined in the vernemq.conf.

    Setting the acl_reload_interval = 0 disables automatic reloading.

    Both configuration parameters can also be changed at runtime using the vmq-admin script.

    Managing the ACL entries

    Topic access is added with lines of the format:

The access type is controlled using read or write. If not provided, then read and write access is granted for the topic. The topic can use the MQTT subscription wildcards + or #.

    The first set of topics are applied to all anonymous clients (assuming allow_anonymous = on). User specific ACLs are added after a user line as follows (this is the username not the client id):

It is also possible to define ACLs based on pattern substitution within the topic. The form is the same as for the topic keyword, but using pattern as the keyword.

    The patterns available for substitution are:

    • %c to match the client id of the client

    • %u to match the username of the client

    The substitution pattern must be the only text for that level of hierarchy. Pattern ACLs apply to all users even if the user keyword has previously been given.

    Example:

VerneMQ currently doesn't cancel active subscriptions in case the ACL file revokes access for a topic. It is possible to reauthorize sessions manually (using vmq-admin).

    Simple ACL Example

    Anonymous users are allowed to

    • publish & subscribe to topic bar.

    • publish to topic foo.

    • subscribe to topic open_to_all.

    User john is allowed to

    • publish & subscribe to topic foo.

    • subscribe to topic baz.

    • publish to topic open_to_all.


    Queue Type

    Specify how queues should process messages, either the fifo or lifo way, with a default setting of fifo. The setting will apply globally, that is, for every spawned queue in a VerneMQ broker. (You can override the queue_type setting in plugins in the auth_on_register hook).

    Max Message Rate

Specifies the maximum incoming publish rate per session per second. Depending on the underlying network buffers this rate is not strictly enforced. Defaults to 0, which means no rate limits apply. Setting it to a value of 2, for instance, limits any publisher to 2 messages per second.

    Max Drain Time

    Due to the eventually consistent nature of the subscriber store it is possible that during queue migration messages still arrive on the old cluster node. This parameter enables compensation for that fact by keeping the queue around for some configured time (in seconds) after it was migrated to the other cluster node.

    Max Msgs per Drain Step

    Specifies the number of messages that are delivered to the remote node per drain step. A large value will provide a faster migration of a queue, but increases the waste of bandwidth in case the migration fails.

    Default Reg View

Allows selecting a different default reg_view. A reg_view is a pre-defined way to route messages. Multiple views can be loaded and used, but one has to be selected as the default. The default routing is vmq_reg_trie, i.e. routing via the built-in trie data structure.

    Reg Views

    A list of views that are started during startup. It's only used in plugins that want to choose dynamically between routing reg_views.

    Outgoing Clustering Buffer Size

An integer specifying how many bytes are buffered in case the remote node is not available. Default is 10000 (bytes).

    Max Connection Lifetime

Defines the maximum lifetime of an MQTT connection in seconds. max_connection_lifetime can be set per listener. This is an implementation of the MQTT security proposal: "Servers may close the Network Connection of Clients and require them to re-authenticate with new credentials."

    It is possible to override the value in auth_on_register(_m5) to a lower limit.

    A sample query which lists only the node where the client session exists and if the client is online would look like the following:

    Note, by default a maximum of 100 rows are returned from each node in the cluster. This is a mechanism to protect the cluster from overload as there can be millions of MQTT sessions and resulting rows. Use --limit=<RowLimit> to override the default value.

    More examples

To list the clients and their subscriptions one can do the following:

    And to list only the clients subscribed to the topic some/topic:

You can also do a regex search to query a subset of topics:

A regex search uses the =~ syntax and is currently limited to alpha-numeric searches. Please note that a regex search puts more load on a node than a regular search.

To figure out when the queue for a persisted session (clean_session=false) was created and when the client last connected, one can use the --queue_started_at and --session_started_at options to list the POSIX timestamps (in microseconds):

    Besides the examples above it is also possible to inspect the number of online or offline messages as well as their payloads and much more. See vmq-admin session show --help for an exhaustive list of all the available options.

    Managing sessions

VerneMQ also supports disconnecting clients and reauthorizing client subscriptions. To disconnect a client, clean up its stored messages, and remove its subscriptions, one can invoke:

    See vmq-admin session disconnect --help for more options and details.

    To reauthorize subscriptions for a client issue the following command:

    This works by reapplying the logic in any installed auth_on_subscribe or auth_on_subscribe_m5 plugin hooks to check the validity of the existing topics and removing those that are no longer allowed. In the example above the reauthorization of the client subscriptions resulted in no changes.

    [
      {
        "type": "vmq",
        "status": "running",
        "address": "0.0.0.0",
        "port": "44053",
        "mountpoint": "",
        "max_conns": 10000,
        "active_conns": 0,
        "all_conns": 0
      },
      {
        "type": "mqtt",
        "status": "running",
        "address": "127.0.0.1",
        "port": "1883",
        "mountpoint": "",
        "max_conns": 10000,
        "active_conns": 0,
        "all_conns": 0
      },
      {
        "type": "mqttws",
        "status": "running",
        "address": "127.0.0.1",
        "port": "1887",
        "mountpoint": "",
        "max_conns": 10000,
        "active_conns": 0,
        "all_conns": 0
      },
      {
        "type": "http",
        "status": "running",
        "address": "127.0.0.1",
        "port": "8888",
        "mountpoint": "",
        "max_conns": 10000,
        "active_conns": 0,
        "all_conns": 0
      }
    ]
    shared_subscription_policy = prefer_local
    mosquitto_sub -h mqtt.example.io -p 1883 -q 2 -t \$share/group/topicname
    mosquitto_sub -h mqtt.example.io -p 1883 -q 2 -t \$share/group/topicname/#
mosquitto_pub -h mqtt.example.io -p 1883 -t topicname -m "This is a test message"
mosquitto_pub -h mqtt.example.io -p 1883 -t topicname/group1 -m "This is a test message"
    plugins.vmq_passwd = off
    allow_anonymous = on
    vmq_passwd.password_file = /etc/vernemq/vmq.passwd
    vmq_passwd.password_reload_interval = 10
    vmq-passwd [-c | -D] passwordfile username
    
    vmq-passwd -U passwordfile
    vmq-passwd -c /etc/vernemq/vmq.passwd henry
    vmq-passwd -D /etc/vernemq/vmq.passwd henry
    vmq-passwd /etc/vernemq/vmq.passwd bob
    vmq-passwd /etc/vernemq/vmq.passwd john
    plugins.vmq_acl = off
    vmq_acl.acl_file = /etc/vernemq/vmq.acl
    vmq_acl.acl_reload_interval = 10
    topic [read|write] <topic>
    user <username>
    pattern [read|write] <topic>
    pattern write sensor/%u/data
    # ACL for anonymous clients
    topic bar
    topic write foo
    topic read open_to_all
    
    
    # ACL for user 'john'
    user john
    topic foo
    topic read baz
    topic write open_to_all
    queue_deliver_mode = balance
    queue_type = fifo
    max_message_rate = 2
    max_drain_time = 20
    max_msgs_per_drain_step = 1000
    vmq_reg_view = "vmq_reg_trie"
    reg_views = "[vmq_reg_trie]"
    outgoing_clustering_buffer_size = 10000
    listener.max_connection_lifetime = 25000
    $ vmq-admin session show
    +---------+---------+----------+---------+---------+---------+
    |client_id|is_online|mountpoint|peer_host|peer_port|  user   |
    +---------+---------+----------+---------+---------+---------+
    | client2 |  true   |          |127.0.0.1|  37098  |undefined|
    | client1 |  true   |          |127.0.0.1|  37094  |undefined|
    +---------+---------+----------+---------+---------+---------+
    $ vmq-admin session show --node --is_online --client_id=client1
+---------+-----------------+
|is_online|      node       |
+---------+-----------------+
|  true   |VerneMQ@127.0.0.1|
+---------+-----------------+
    $ vmq-admin session show --topic --client_id
    +---------+-----------------+
    |client_id|     topic       |
    +---------+-----------------+
    | client2 |some/other/topic1|
    | client1 |some/other/topic2|
    | client1 |   some/topic    |
    +---------+-----------------+
    $ vmq-admin session show --topic --client_id --topic=some/topic
    +---------+----------+
    |client_id|  topic   |
    +---------+----------+
    | client1 |some/topic|
    +---------+----------+
    $ vmq-admin session show --topic --client_id --topic=~some/other/.*
    +---------+-----------------+
    |client_id|      topic      |
    +---------+-----------------+
    | client2 |some/other/topic1|
    | client1 |some/other/topic |
    +---------+-----------------+
    $ vmq-admin session show --topic --client_id --topic=some/topic
    +---------+----------+
    |client_id|  topic   |
    +---------+----------+
    | client1 |some/topic|
    +---------+----------+
    $ vmq-admin session show --client_id=client1 --queue_started_at --session_started_at
    +----------------+------------------+
    |queue_started_at|session_started_at|
    +----------------+------------------+
    | 1549379963575  |  1549379974905   |
    +----------------+------------------+
    $ vmq-admin session disconnect client-id=client1 --cleanup
    $ vmq-admin session reauthorize username=username client-id=client1
    Unchanged

    Not a tuning guide

    General relation to OS configuration values

    You need to know about and configure a couple of Operating System and Erlang VM configs to operate VerneMQ efficiently. First, make sure you have set appropriate OS file limits according to our guide here. Second, when you run into performance problems, don't forget to check the settings in the vernemq.conf file. (Can't open more than 10k connections? Well, is the listener configured to open more than 10k?)

    TCP buffer sizes

    This is the number one topic to look at, if you need to keep an eye on RAM usage.

    Context: All network I/O in Erlang uses an internal driver. This driver will allocate and handle an internal application side buffer for every TCP connection. The default size of these buffers will determine your overall RAM use.

    VerneMQ calculates the buffer size from the OS level TCP send and receive buffers:

    val(buffer) >= max(val(sndbuf),val(recbuf))

    Those values correspond to net.ipv4.tcp_wmem and net.ipv4.tcp_rmem in your OS's sysctl configuration. One way to minimize RAM usage is therefore to configure those settings (Debian example):

    This would result in a 32KB application buffer for every connection. On a multi-purpose server where you install VerneMQ as a test, you might not want to change your OS's TCP settings, of course. In that case, you can still configure the buffer sizes manually for VerneMQ by using the advanced.config file.

    The advanced.config file

    The advanced.config file is a supplementary configuration file that sits in the same directory as the vernemq.conf. You can set additional config values for any of the OTP applications that are part of a VerneMQ release. To just configure the TCP buffer size manually, you can create an advanced.config file:

    The vm.args file

For very advanced & custom configurations, you can add a vm.args file to the same directory where the vernemq.conf file is located. Its purpose is to configure parameters for the Erlang Virtual Machine. This will override any Erlang specific parameters you might have configured via the vernemq.conf. Normally, VerneMQ auto-generates a vm.args file for every boot in /var/lib/vernemq/generated.configs/ (Debian package example) from vernemq.conf and other potential configuration sources.

A manually created vm.args is not supplementary; it is a full replacement of the auto-generated file! Keep that in mind. An easy way to go about this is to copy and extend the auto-generated file.

This is how a vm.args file might look:

    A note on TLS

    Using TLS will of course increase the CPU load during connection setup. Latencies in message delivery will be increased, and your overall message throughput per second will be lower.

    TLS will require considerably more RAM. Instead of 2 Erlang processes per connection, TLS will use 3. You'll have a session process, a queue process, and a TLS handler process that can encapsulate quite a big state (> 30KB).

    Erlang/OTP uses its own TLS implementation, only using OpenSSL for crypto, but not connection handling. For situations with high connection setup rate or overall high connection churn rate, the Erlang TLS implementation might be too slow. On the other hand, Erlang TLS gives you great concurrency & fault isolation for long-lived connections.

    Some Erlang deployments terminate SSL/TLS with an external component or with a load balancer component. Do some testing & try to find out what works best for you.

The Erlang TLS implementation is rather picky about certificate chains & formats. Don't give up if you encounter errors at first. On Linux, you can quickly find out more with the openssl s_client command.

    Loadtesting VerneMQ

    Loadtesting VerneMQ with vmq_mzbench

    You can loadtest VerneMQ with any MQTT-capable loadtesting framework. Our recommendation is to use a framework you are familiar with, with MQTT plugins or scenarios that suit your needs.

While MZBench is currently not actively developed, it is still one of the options: you can use the vmq_mzbench tool. It is based on Machinezone's very powerful original MZBench system, currently available in a community repository: MZBench system. MZBench lets you narrow down what hardware specs are needed to meet your performance goals. You can state your requirements for latency percentiles (and much more) in a formal way, and let vmq_mzbench automatically fail if it can't meet the requirements.

    If you have an AWS account, vmq_mzbench can automagically provision worker nodes for you. You can also run it locally, of course.

    1. Install MZBench

Please follow the MZBench installation guide.

    2. Install vmq_mzbench

    Actually, you don't even have to install vmq_mzbench, if you don't want to. Your scenario file will automatically fetch vmq_mzbench for any test you do. vmq_mzbench runs every test independently, so it has a provisioning step for any test, even if you only run it on a local worker.

    To install vmq_mzbench on your computer, go through the following steps:

    To provision your tests from this local repository, you'll have to tell the scenario scripts to use rsync. Add this to the scenario file:

If you'd just like the script itself to fetch vmq_mzbench, you can direct it to GitHub:

    3. Write vmq_mzbench scenario files

MZBench recently switched from an Erlang-styled scenario DSL to a more Python-like DSL dubbed BDL (Benchmark Definition Language). Have a look at the BDL examples on GitHub.

You can familiarize yourself quickly with MZBench's guide on writing loadtest scenarios.

    There's not much to learn, just make sure you understand how pools and loops work. Then you can add the vmq_mzbench statement functions to the mix and define actual loadtest scenarios.

    Currently vmq_mzbench exposes the following statement functions for use in MQTT scenario files:

    • random_client_id(State, Meta, I): Create a random client Id of length I

    • fixed_client_id(State, Meta, Name, Id): Create a deterministic client Id with schema Name ++ "-" ++ Id

    • worker_id(State, Meta): Get the internal, sequential worker Id

    It's easy to add more statement functions to the MQTT worker if needed, get in touch with us.

    Running VerneMQ using Docker

As well as being available as packages that can be installed directly into the operating system, VerneMQ is also available as a Docker image. Below is an example of how to set up a couple of VerneMQ nodes using Docker.

    Start a VerneMQ cluster node

    To use the provided docker images the VerneMQ EULA must be accepted. See Accepting the VerneMQ EULA for more information.

Sometimes you need to configure port forwarding (on a Mac, for example):

    This starts a new node that listens on 1883 for MQTT connections and on 8080 for MQTT over websocket connections. However, at this moment the broker won't be able to authenticate the connecting clients. To allow anonymous clients use the DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on environment variable.

Warning: Setting allow_anonymous=on completely disables authentication in the broker and plugin authentication hooks are never called! See more information about the authentication hooks here.

    Autojoining a VerneMQ cluster

    This allows a newly started container to automatically join a VerneMQ cluster. Assuming you started your first node like the example above you could autojoin the cluster (which currently consists of a single container 'vernemq1') like the following:

    (Note, you can find the IP of a docker container using docker inspect <CONTAINER_NAME> | grep \"IPAddress\").

    Checking cluster status

    To check if the above containers have successfully clustered you can issue the vmq-admin command:

    Loadtesting VerneMQ

    Loadtesting VerneMQ with vmq_mzbench

You can loadtest VerneMQ with our vmq_mzbench tool. It is based on Machinezone's very powerful MZBench system and lets you narrow down what hardware specs are needed to meet your performance goals. You can state your requirements for latency percentiles (and much more) in a formal way, and let vmq_mzbench automatically fail if it can't meet the requirements.

    If you have an AWS account, vmq_mzbench can automagically provision worker nodes for you. You can also run it locally, of course.

    1. Install MZBench

Please follow the MZBench installation guide.

    Change Open File Limits

A guide that shows how to change the open file limits

    VerneMQ can consume a large number of open file handles when thousands of clients are connected as every connection requires at least one file handle.

    Most operating systems can change the open-files limit using the ulimit -n command. Example:
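For example, to raise the limit to 65536 for the current shell (the value is only an illustration):

ulimit -n 65536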

    However, this only changes the limit for the current shell session. Changing the limit on a system-wide, permanent basis varies more between systems.

    Linux

On most Linux distributions, the total limit for open files is controlled by sysctl.
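You can inspect the current system-wide maximum, for example, with:

sysctl fs.file-max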

    Tracing

    Real-time inspection

    Introduction

When troubleshooting a system like VerneMQ, it is often useful to know what a client is actually sending and receiving and what VerneMQ is doing with this information. For this purpose VerneMQ has a built-in tracing mechanism. It is safe to use in production settings, as there is very little overhead in running the tracer, and it has built-in protection mechanisms to stop traces that produce too much information.

    docker run --name vernemq1 -d vernemq/vernemq

    Netdata

    Netdata Metrics

    A great way to monitor VerneMQ is to use Netdata or Netdata Cloud. Netdata uses VerneMQ in its Netdata Cloud service, and has developed full integration with VerneMQ.

    This means that you have one of the best monitoring tools ready for VerneMQ. Netdata will show you all the VerneMQ metrics in a realtime dashboard.

    When Netdata runs on the same node as VerneMQ it will automatically discover the VerneMQ node.

    Learn how to setup Netdata for VerneMQ with the following guide:

    https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/vernemq

    2. Install vmq_mzbench

    Actually, you don't even have to install vmq_mzbench, if you don't want to. Your scenario file will automatically fetch vmq_mzbench for any test you do. vmq_mzbench runs every test independently, so it has a provisioning step for any test, even if you only run it on a local worker.

In case you still want to have vmq_mzbench on your local machine, go through the following steps:

    To provision your tests from this local repository, you'll have to tell the scenario scripts to use rsync. Add this to the scenario file:

If you'd just like the script itself to fetch vmq_mzbench, you can direct it to GitHub:

    3. Write vmq_mzbench scenario files

    MZBench recently switched from an Erlang-styled Scenario DSL to a more python-like DSL dubbed BDL (Benchmark Definition Language). Have a look at the BDL examples on Github.

    You can familiarize yourself quickly with MZBench's guide on writing loadtest scenarios.

    There's not much to learn, just make sure you understand how pools and loops work. Then you can add the vmq_mzbench statement functions to the mix and define actual loadtest scenarios.

    Here's a list of the most important vmq_mzbench statement functions you can use in MQTT scenario files:

    • random_client_id(State, Meta, I): Create a random client Id of length I

    • fixed_client_id(State, Meta, Name, Id): Create a deterministic client Id with schema Name ++ "-" ++ Id

    • worker_id(State, Meta): Get the internal, sequential worker Id

    • client(State, Meta): Get the client Id you set yourself during connection setup with the option {t, client, "client"}

    • connect(State, Meta, ConnectOpts): Connect to the broker with the options given in ConnectOpts

    • disconnect(State, Meta): Disconnect normally

    • subscribe(State, Meta, Topic, QoS): Subscribe to Topic with Quality of Service QoS

    • subscribe_to_self(State, _Meta, TopicPrefix, Qos): Subscribe to an exclusive topic, for 1:1 testing

• unsubscribe(State, Meta, Topic): Unsubscribe from Topic

    • publish(State, Meta, Topic, Payload, QoS): Publish a message with binary Payload to Topic with QoS

    • publish(State, Meta, Topic, Payload, QoS, RetainFlag): Publish a message with binary Payload to Topic with QoS and RetainFlag

    • publish_to_self(State, Meta, TopicPrefix, Payload, Qos): -> Publish a payload to an exclusive topic, for 1:1 testing

    It's easy to add more statement functions to the MQTT worker if needed. For a full list of the exported statement functions, we encourage you to have a look at the MQTT worker code directly.

    vmq_mzbench tool
    MZBench system
    MZBench installation guide
    Tracing clients

    To trace a client the following command is available:

    See the available flags by calling vmq-admin trace client --help.

    A typical trace could look like the following:

    In this particular trace a trace was started for the client with client-id client. At first no clients are connected to the node where the trace has been started, but a little later the client connects and we see the trace come alive. The strange identifier <7616.3443.1> is called a process identifier and is the identifier of the process in which the trace happened - this isn't relevant unless one wants to correlate the trace with log entries where process identifiers are also logged. Besides the process identifier there are some lines with MQTT SEND and MQTT RECV which are to be understood from the perspective of the broker. In the above trace this means that first the broker receives a CONNECT frame and replies with a CONNACK frame. Each MQTT event is annotated with the data from the MQTT frame to give as much detail and insight as possible.

    Notice the auth_on_register call between CONNECT and CONNACK which is the authentication plugin hook being called to authenticate the client. In this case the hook returned ok which means the client was successfully authenticated.

Likewise notice the auth_on_subscribe call between the SUBSCRIBE and SUBACK frames, which is the plugin hook used to authorize whether this particular subscription should be allowed or not. In this case the subscription was authorized.

    Trace options

The client trace command has additional options as shown by vmq-admin trace client --help. They are hopefully self-explanatory:

    A convenient tool is the ts (timestamp) tool which is available on many systems. If the trace output is piped to this command each line is prefixed with a timestamp:

    ts | sudo vmq-admin trace client client-id=tester

    It is currently not possible to start multiple traces from multiple shells, or trace multiple ClientIDs.

    Stopping a Trace from another shell

If you lose access to the shell from where you started a trace, you might need to stop that trace before you can spawn a new one. Your attempt to spawn a second trace will result in the following output:

You can stop a running trace using the stop_all command from a second shell. This will log a message to the other shell, telling that session that it is being externally terminated. The calling shell will silently return and be available for a new trace.

    sudo sysctl -w net.ipv4.tcp_rmem="4096 16384 32768"
    sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 32768"
    
    # Nope, these values are not recommendations!
    # You really need to decide yourself.
    [{vmq_server, [
              {tcp_listen_options,
              [{sndbuf, 4096},
               {recbuf, 4096}]}]}].
    +P 256000
    -env ERL_MAX_ETS_TABLES 256000
    -env ERL_CRASH_DUMP /erl_crash.dump
    -env ERL_FULLSWEEP_AFTER 0
    -env ERL_MAX_PORTS 262144
    +A 64
    -setcookie vmq  # Important: Use your own private cookie... 
-name VerneMQ@127.0.0.1
    +K true
    +sbt db
    +sbwt very_long
    +swt very_low
    +sub true
    +Mulmbcs 32767
    +Mumbcgs 1
    +Musmbcs 2047
    # Nope, these values are not recommendations!
    # You really need to decide yourself, again ;)
    git clone git://github.com/vernemq/vmq_mzbench.git
    cd vmq_mzbench
    ./rebar get-deps
    ./rebar compile
    {make_install, [
    {rsync, "/path/to/your/installation/vmq_mzbench/"},
    {exclude, "deps"}]},
    {make_install, [
    {git, "git://github.com/vernemq/vmq_mzbench.git"}]},
    docker run -p 1883:1883 --name vernemq1 -d vernemq/vernemq
    docker run -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on" --name vernemq1 -d vernemq/vernemq
    docker run -e "DOCKER_VERNEMQ_DISCOVERY_NODE=<IP-OF-VERNEMQ1>" --name vernemq2 -d vernemq/vernemq
    docker exec vernemq1 vmq-admin cluster show
    +--------------------+-------+
    |        Node        |Running|
    +--------------------+-------+
    |[email protected]| true  |
    |[email protected]| true  |
    +--------------------+-------+
    git clone git://github.com/vernemq/vmq_mzbench.git
    cd vmq_mzbench
    ./rebar get-deps
    ./rebar compile
    {make_install, [
    {rsync, "/path/to/your/installation/vmq_mzbench/"},
    {exclude, "deps"}]},
    {make_install, [
    {git, "git://github.com/vernemq/vmq_mzbench.git"}]},
    vmq-admin trace client client-id=<client-id>
    $ vmq-admin trace client client-id=client
    No sessions found for client "client"
    New session with PID <7616.3443.1> found for client "client"
    <7616.3443.1> MQTT RECV: CID: "client" CONNECT(c: client, v: 4, u: username, p: password, cs: 1, ka: 30)
    <7616.3443.1> Calling auth_on_register({{172,17,0,1},34274},{[],<<"client">>},username,password,true) 
    <7616.3443.1> Hook returned "ok"
    <7616.3443.1> MQTT SEND: CID: "client" CONNACK(sp: 0, rc: 0)
    <7616.3443.1> MQTT RECV: CID: "client" SUBSCRIBE(m1) with topics:
        q:0, t: "topic"
    <7616.3443.1> Calling auth_on_subscribe(username,{[],<<"client">>}) with topics:
        q:0, t: "topic"
    <7616.3443.1> Hook returned "ok"
    <7616.3443.1> MQTT SEND: CID: "client" SUBACK(m1, qt[0])
    <7616.3443.1> Trace session for client stopped
    Options
    
      --mountpoint=<Mountpoint>
          the mountpoint for the client to trace.
          Defaults to "" which is the default mountpoint.
      --rate-max=<RateMaxMessages>
          the maximum number of messages for the given interval,
          defaults to 10.
      --rate-interval=<RateIntervalMS>
          the interval in milliseconds over which the max number of messages
          is allowed. Defaults to 100.
      --trunc-payload=<SizeInBytes>
          control when the payload should be truncated for display.
          Defaults to 200.
    Cannot start trace as another trace is already running.
    $ sudo vmq-admin trace stop_all

As seen above, it is generally set high enough for VerneMQ. If you have other things running on the system, you might want to consult the sysctl manpage for how to change that setting. However, what most needs to be changed is the per-user open files limit. This requires editing /etc/security/limits.conf, for which you'll need superuser access. If you installed VerneMQ from a binary package, add lines for the vernemq user like so, substituting your desired hard and soft limits:
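For example (65536 is only an illustration; substitute the limits you actually need):

vernemq soft nofile 65536
vernemq hard nofile 65536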

    On Ubuntu, if you’re always relying on the init scripts to start VerneMQ, you can create the file /etc/default/vernemq and specify a manual limit like so:
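For example (again, choose a value that fits your workload):

ulimit -n 65536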

    This file is automatically sourced from the init script, and the VerneMQ process started by it will properly inherit this setting. As init scripts are always run as the root user, there’s no need to specifically set limits in /etc/security/limits.conf if you’re solely relying on init scripts.

    On CentOS/RedHat systems, make sure to set a proper limit for the user you’re usually logging in with to do any kind of work on the machine, including managing VerneMQ. On CentOS, sudo properly inherits the values from the executing user.

    Systemd

Systemd allows you to set the open file limit. The LimitNOFILE parameter defines the maximum number of file descriptors that a service or system unit can open. In the past, "infinite" was often chosen, which actually means an OS/systemd-dependent maximum number. However, in recent versions of systemd, such as those in RHEL 9, CentOS Stream 9, and others, the default value is set to around a billion, significantly higher than necessary and than the defaults used in older distributions. It is advisable to set a reasonable value for LimitNOFILE based on the specific use case. Please consult https://access.redhat.com/solutions/1479623 for more information (RHEL 9).

    Enable PAM-Based Limits for Debian & Ubuntu

    It can be helpful to enable PAM user limits so that non-root users, such as the vernemq user, may specify a higher value for maximum open files. For example, follow these steps to enable PAM user limits and set the soft and hard values for all users of the system to allow for up to 65536 open files.

    Edit /etc/pam.d/common-session and append the following line:
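The line in question enables the pam_limits module, typically:

session    required   pam_limits.so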

    If /etc/pam.d/common-session-noninteractive exists, append the same line as above.

    Save and close the file.

    Edit /etc/security/limits.conf and append the following lines to the file:
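For example, to allow up to 65536 open files for all users (the asterisks can be replaced by a specific user such as vernemq, as noted further below):

*    soft    nofile    65536
*    hard    nofile    65536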

    1. Save and close the file.

2. (optional) If you will be accessing the VerneMQ nodes via secure shell (ssh), you should also edit /etc/ssh/sshd_config, uncomment the UseLogin line, and set its value to yes.

3. Restart the machine so that the limits take effect and verify that the new limits are set with the following command:
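One way to verify the limits after the restart is, for example:

ulimit -a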

    Enable PAM-Based Limits for CentOS and Red Hat

1. Edit /etc/security/limits.conf and append the following lines to the file (as in the Debian & Ubuntu example above):

2. Save and close the file.

3. Restart the machine so that the limits take effect and verify that the new limits are set with the following command:

    In the above examples, the open files limit is raised for all users of the system. If you prefer, the limit can be specified for the vernemq user only by substituting the two asterisks (*) in the examples with vernemq.

    Solaris

    In Solaris 8, there is a default limit of 1024 file descriptors per process. In Solaris 9, the default limit was raised to 65536. To increase the per-process limit on Solaris, add the following line to /etc/system:
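The line to add typically looks like this (the value is only an illustration):

set rlim_fd_max=65536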


    Mac OS X

    To check the current limits on your Mac OS X system, run:
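For example:

launchctl limit maxfiles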

    The last two columns are the soft and hard limits, respectively.

    To adjust the maximum open file limits in OS X 10.7 (Lion) or newer, edit /etc/launchd.conf and increase the limits for both values as appropriate.

    For example, to set the soft limit to 16384 files, and the hard limit to 32768 files, perform the following steps:

    1. Verify current limits:

      The response output should look something like this:

    2. Edit (or create) /etc/launchd.conf and increase the limits. Add lines that look like the following (using values appropriate to your environment):

    3. Save the file, and restart the system for the new limits to take effect. After restarting, verify the new limits with the launchctl limit command:

      The response output should look something like this:

    Attributions

    This work, "Open File Limits", is a derivative of Open File Limits by Riak, used under Creative Commons Attribution 3.0 Unported License. "Open File Limits" is licensed under Creative Commons Attribution 3.0 Unported License by Erlio GmbH.

    Plugins

    Managing VerneMQ Plugins

    Many aspects of VerneMQ can be extended using plugins. The standard VerneMQ package comes with several official plugins. You can show the enabled & running plugins via:
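The command for this is:

vmq-admin plugin show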

    The command above displays all the enabled plugins together with the hooks they implement:

    The table will show the following information:

    • name of the plugin

    • type (application or single module)

    • all the hooks implemented in the plugin

    • the exact module and function names (M:F/A) implementing those hooks.

    As an example on how to read the table: the vmq_passwd:auth_on_register/5 function is the actual implementation of the auth_on_register hook in the vmq_passwd application plugin.

    In addition, you can conclude that the plugin is currently running, as it shows up in the table.

    To display information on internal plugins, add the --internal flag. The table below shows you that the generic metadata application and the generic message store are actually internal plugins.

    Enable a plugin
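For example, to enable the vmq_acl plugin discussed below (assuming the standard --name option):

vmq-admin plugin enable --name=vmq_acl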

This enables the ACL plugin. Because the vmq_acl plugin is already started, the above command won't succeed. In case the plugin sits in an external directory you must also provide the --path=PathToPlugin option.

    Disable a plugin

    Persisting Plugin Configurations and Starts

    To make a plugin start when VerneMQ boots, you need to tell VerneMQ in the main vernemq.conf file.

    The general syntax to enable a plugin is to add a line like plugins.pluginname = on. Using the vmq_passwd plugin as an example:
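For example:

plugins.vmq_passwd = on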

    If the plugin is external (all your own VerneMQ plugin will be of this category), the path can be specified like this:

    Plugin specific settings can be configured via myplugin.somesetting = value, like:

    Check the vernemq.conf file for additional details and examples.

    Introduction

    Everything you must know to properly configure and deploy a VerneMQ Cluster

VerneMQ can be easily clustered. Clients can then connect to any cluster node and receive messages from any other cluster node. However, the MQTT specification gives certain guarantees that are hard to fulfill in a distributed environment, especially when network partitions occur. We'll discuss the way VerneMQ deals with network partitions in its own subsection.

    Set the Cookie! All cluster nodes need to be configured to use the same Cookie value. It can be set in the vernemq.conf with the distributed_cookie setting. Set the Cookie to a private value for security reasons!
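For example (choose your own private value):

distributed_cookie = your_private_cookie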

For a successful VerneMQ cluster setup, it is important to choose proper VerneMQ node names. In vernemq.conf change nodename = VerneMQ@127.0.0.1 to something appropriate. Make sure that the node names are unique within the cluster. Read the section on inter-node communication if firewalls are involved.

    A note on statefulness

    Before you go ahead and experience the full power of clustering VerneMQ, be aware of its stateful character. An MQTT broker is a stateful application and a VerneMQ cluster is a stateful cluster.

    What does this mean in detail? It means that clustered VerneMQ nodes will share information about connected clients and sessions but also meta-information about the cluster itself.

    For instance, if you stop a cluster node, the VerneMQ cluster will not just forget about it. It will know that there's a node missing and it will keep looking for it. It will know there's a netsplit situation and it will heal the partition when the node comes back up. But if the missing node never comes back there's an eternal netsplit. (still resolvable by making the missing node explicitly leave).

    This doesn't mean that a VerneMQ cluster cannot dynamically grow and shrink. But it means you have to tell the cluster what you intend to do, by using join and leave commands.

    If you want a cluster node to leave the cluster, well... use the vmq-admin cluster leave command. If you want a node to join a cluster use the vmq-admin cluster join command.

    Makes sense? Go ahead and create your first VerneMQ cluster!

    Joining a Cluster

The discovery-node can be any other node. It is not necessary to always choose the same node as the discovery node. It is important that only a node with an empty history joins a cluster. One should not try to add a node that already had traffic on it to a cluster.
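For example, on the new, empty node:

vmq-admin cluster join discovery-node=<OtherClusterNode>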

    Leaving a Cluster

    Detailed Cluster Leave, Case A: Make a live node leave

A cluster leave will actually do a lot more work, and gives you some options to choose from. The node leaving the cluster will go to great lengths trying to migrate its existing queues to other nodes. As queues (online or offline) are live processes in a VerneMQ node, it will only exit after it has migrated them.

    Let's look at the steps in detail:

    1. vmq-admin cluster leave node=<NodeThatShouldGo>

    This first step will only stop the MQTT Listeners of the node to ensure that no new connections are accepted. It will not interrupt the existing connections, and behind the scenes the node will not leave the cluster yet. Existing clients are still able to publish and receive messages at this point.

    The idea is to give a grace period with the hope that existing clients might re-connect (to another node). If you have decided that this period is over (after 5 minutes or 1 day is up to you), you proceed with step 2: disconnecting the rest of the clients.

2. vmq-admin cluster leave node=<NodeThatShouldGo> -k

    The -k flag will delete the MQTT Listeners of the leaving node, taking down all live connections. If this is what you want from the beginning, you can do this right away as a first step.

    Now, queue migration is triggered by clients re-connecting to other nodes. They will claim their queue and it will get migrated. Still, there might be some offline queues remaining on the leaving node, because they were pre-existing or because some clients do not re-connect and do not reclaim their queues.

    VerneMQ will throw an exception if there are remaining offline queues after a configurable timeout. The default is 60 seconds, but you can set it as an option to the cluster leave command. As soon as the exception shows in console or console.log, you can actually retry the cluster leave command (including setting a migration timeout (-t), and an interval in seconds (-i) indicating how often information on the migration progress should be printed to the console.log):

3. vmq-admin cluster leave node=<NodeThatShouldGo> -k -i 5 -t 120

    After this timeout VerneMQ will forcefully migrate the remaining offline queues to other cluster nodes in a round robin manner. After doing that, it will stop the leaving VerneMQ node.

    Note 1: While doing a cluster leave, it's a good idea to tail -f the VerneMQ console.log to see queue migration progress.

Note 2: A node that has left the cluster is considered dead. If you want to reuse that node as a single node broker, you have to (backup & rename &) delete the whole VerneMQ data directory and start with a new directory. (It will be created automatically by VerneMQ at boot.)

    Otherwise that node will start looking for its old cluster peers when you restart it.

    Detailed Cluster Leave, Case B: Make a stopped node leave

    So, case A was the happy case. You left the cluster with your node in a controlled manner, and everything worked, including a complete queue (and message) transfer to other nodes.

Let's look at the second possibility where the node is already down. Your cluster is still counting on it though, and possibly blocking new subscriptions for that reason, so you want to make the node leave.

To do this, use the same command(s) as in the first case. There is one important consequence to note: by making a stopped node leave, you basically throw away persistent queue content, as VerneMQ won't be able to migrate or deliver it.

    Let's repeat that to make sure:

    Case B: Currently the persisted QoS 1 & QoS 2 messages aren't replicated to the other nodes by the default message store backend. Currently you will lose the offline messages stored on the leaving node.

    Getting Cluster Status Information

    Introduction

    Learn how to implement VerneMQ Plugins for customizing many aspects of how VerneMQ deals with client connections, subscriptions, and message flows.

VerneMQ is implemented in Erlang OTP and therefore runs on top of the Erlang VM. For this reason native plugins have to be developed in a programming language that runs on the Erlang VM. The most popular choice is obviously the Erlang programming language itself, but Elixir or Lisp Flavoured Erlang (LFE) could be used too. That said, all the plugin hooks are also exposed over (a subset of) Lua, and over WebHooks. This allows you to implement a VerneMQ plugin by simply implementing a WebHook endpoint, using any programming language you like. You can also implement a VerneMQ plugin as a Lua script.

    Be aware that in VerneMQ a plugin does NOT run in a sandboxed environment and misbehaviour could seriously harm the system (e.g. performance degradation, reduced availability as well as consistency, and message loss). Get in touch with us in case you require a review of your plugin.

    This guide explains the different flows that expose different hooks to be used for custom plugins. It also describes the code structure a plugin must follow in order to be successfully loaded and started by the VerneMQ plugin mechanism.

    All the hooks that are currently exposed fall into one of three categories.

    1. Hooks that allow you to change the protocol flow. An example could be to authenticate a client using the auth_on_register hook.

    2. Hooks that inform you about a certain action, that could be used for example to implement a custom logging or audit plugin.

    3. Hooks that are called given a certain condition

    Notice that some hooks come in two variants, for example the auth_on_register and the auth_on_register_m5 hooks. The _m5 postfix refers to the fact that this hook is only invoked in an MQTT 5.0 session context, whereas the other is invoked in an MQTT 3.1/3.1.1 session context.

    Before going into the details, let's give a quick intro to the VerneMQ plugin system.

    Plugin System

    The VerneMQ plugin system allows you to load, unload, start and stop plugins during runtime, and you can even upgrade a plugin during runtime. To make this work it is required that the plugin is an OTP application and strictly follows the rules of implementing the Erlang OTP application behaviour. It is recommended to use the rebar3 toolchain to compile the plugin. VerneMQ comes with built-in support for the directory structure used by rebar3.

    Every plugin has to describe the hooks it is implementing as part of its application environment file. The vmq_acl plugin for instance comes with the application environment file below:

    Lines 6 to 10 instruct the plugin system to ensure that those dependent applications are loaded and started. If you're using third party dependencies make sure that they are available in compiled form and part of the plugin load path. Lines 16 to 20 allow the plugin system to compile the plugin rules. Yes, you've heard correctly. The rules are compiled into Erlang VM code to make sure the lookup and execution of plugin code is as fast as possible. Some hooks exist which are used internally such as the change_config/1, we'll describe those at some other point.

    The environment value for vmq_plugin_hooks is a list of hooks. A hook is specified by {Module, Function, Arity, Options}.

    To streamline the plugin development we provide a different Erlang behaviour for every hook a plugin implements. Those behaviours are part of the vernemq_dev library application, which you should add as a dependency to your plugin. vernemq_dev also comes with a header file that contains all the type definitions used by the hooks.
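    As a minimal sketch (the module name and the accepted username are illustrative, and we assume the behaviour module in vernemq_dev is named after the hook), a plugin implementing the auth_on_register hook could look like this:

    -module(my_auth_plugin).
    -behaviour(auth_on_register_hook).
    -export([auth_on_register/5]).

    %% Sketch only: accept a single hard-coded username and hand control
    %% to the next plugin in the chain for everybody else.
    auth_on_register(_Peer, _SubscriberId, <<"demo-user">>, _Password, _CleanSession) ->
        ok;
    auth_on_register(_Peer, _SubscriberId, _UserName, _Password, _CleanSession) ->
        next.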

    Chaining

    It is possible to have multiple plugins serving the same hook. Depending on the hook the plugin chain is used differently. The most elaborate chains can be found for the hooks that deal with authentication and authorization flows. We also call them conditional chains as a plugin can give control away to the next plugin in the chain. The image shows a sample plugin chain for the auth_on_register hook.

    Most hooks don't require conditions and are mainly used as event handlers. In this case all plugins in a chain are called. An example for such a hook would be the on_register hook.

    A rather specific case is the need to call only one plugin instead of iterating through the whole chain. VerneMQ uses such hooks for its pluggable message storage system.

    Unless you're implementing your custom message storage backend, you probably won't need this style of hook.

    The position in the plugin call chain is currently implicitly given by the order the plugins have been started.

    Startup

    The plugin mechanism uses the application environment file to infer the applications that it has to load and start prior to starting the plugin itself. It internally uses the application:ensure_all_started/1 function call to start the plugin. If your setup is more complex you could override this behaviour by implementing a custom start/0 function inside a module that's named after your plugin.

    Teardown

    The plugin mechanism uses application:stop/1 to stop and unload the plugin. This won't stop the dependent application started at startup. If you rely on third party applications that aren't started as part of the VerneMQ release, e.g. a database driver, you can implement a custom stop/0 function inside a module that's named after your plugin and properly stop the driver there.
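    A minimal sketch of such a module (the plugin name myplugin and the driver application my_db_driver are hypothetical):

    -module(myplugin).
    -export([start/0, stop/0]).

    %% Custom start: make sure a third-party driver that is not part of the
    %% VerneMQ release is running before the plugin application itself starts.
    start() ->
        {ok, _} = application:ensure_all_started(my_db_driver),
        {ok, _} = application:ensure_all_started(myplugin),
        ok.

    %% Custom stop: stop the plugin first, then shut the driver down.
    stop() ->
        application:stop(myplugin),
        application:stop(my_db_driver).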

    Public Type Specs

    The vmq_types.hrl exposes all the type specs used by the hooks. The following types are used by the plugin system:

    Live reconfiguration

    Managing VerneMQ live config values.

    You can dynamically re-configure most of VerneMQ's settings on a running node by using the vmq-admin set command.

    The following config values can be handled dynamically:

    Settings dynamically configured with the vmq-admin set command will be reset by vernemq.conf upon broker restart.

    Setting a value for the local node

    Let's change the max_client_id_size as an example. (We might have noticed that some clients can't login because their client ID is too long, but we don't want to restart the broker for that). Note that you can also set multiple values with the same command.
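    For example (the values are purely illustrative), several settings can be changed in one call:

    vmq-admin set max_client_id_size=45 max_message_size=32768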

    Setting a value for an arbitrary cluster node

    Setting a value for all cluster nodes

    Show current VerneMQ config values

    For the local node

    You can show one or multiple values in a simple table:

    For an arbitrary node

    For all cluster nodes

    MQTT Bridge

    VerneMQ can interface with other brokers (and itself) via MQTT bridges.

    Bridges are a non-standard way (but de-facto standard) among MQTT broker implementations to connect two different MQTT brokers. Over a bridge, the topic tree of a remote broker becomes part of the topic tree on the local broker. VerneMQ bridges support plain TCP connections as well as SSL connections.

    A bridge will be a point-to-point connection between 2 brokers, but can still forward all the messages from all cluster nodes to another cluster.

    The VerneMQ bridge plugin currently forwards messages using MQTT protocol version 3.1.1. MQTT v5 messages will still be forwarded but be aware that metadata like user-defined properties will be dropped.

    Enabling the bridge functionality

    The MQTT bridge plugin (vmq_bridge) is distributed with VerneMQ as an integrated plugin but is not enabled by default. After configuring the bridge as described below, make sure to enable the plugin by setting (vernemq.conf):

    See Managing plugins for more information on working with plugins.

    Basic information on the configured bridges can be displayed on the admin CLI:

    The vmq-admin bridge command is only available when the bridge plugin is running.

    Sample MQTT Bridge

    To configure vmq_bridge you need to edit the bridge section of the vernemq.conf file to set endpoints and mapping topics. A bridge can push or pull messages, as defined in the topic pattern list.

    Setup a bridge to a remote broker:

    Different connection parameters can be set:

    Define the topics the bridge should incorporate in its local topic tree (by subscribing to the remote), or the topics it should export to the remote broker (by publishing to the remote). The configuration syntax is similar to that used by the Mosquitto broker:

    topic defines a topic pattern that is shared between the two brokers. Any topics matching the pattern (which may include wildcards) are shared. The second parameter defines the direction that the messages will be shared in, so it is possible to import messages from a remote broker using in, export messages to a remote broker using out or share messages in both directions. If this parameter is not defined, VerneMQ defaults to out. The QoS level defines the publish/subscribe QoS level used for this topic and defaults to 0. (Source: mosquitto.conf)

    The local-prefix and remote-prefix can be used to prefix incoming or outgoing publish messages.

    Currently the # wildcard is treated as a comment by the configuration parser, please use * instead.

    A simple example:

    Sample MQTT Bridge that uses SSL/TLS

    SSL bridges support the same configuration parameters as TCP bridges (change .tcp to .ssl), but need further instructions for handling the SSL specifics:

    Restarting MQTT Bridges

    MQTT Bridges that are initiated from the source broker (push bridges) are started when VerneMQ boots and finds a bridge configuration in the vernemq.conf file. Sometimes it's useful to restart MQTT bridges without restarting a broker. This can be done by disabling, then enabling the vmq_bridge plugin and manually calling the bridge start command:

    A typical VerneMQ deployment

    In the following we describe how a typical VerneMQ deployment can look and some of the concerns one has to take into account when designing such a system.

    A typical VerneMQ deployment could from a high level look like the following:

    In this scenario MQTT clients connect from the internet and are authenticated and authorized against the Authentication Management Service and publish and receive messages, either with each other or with the Backend-Services which might be responsible for sending control messages to the clients or storing and forwarding messages to other systems or databases for later processing.

    To build and deploy a system such as the above, a lot of decisions have to be made. These can concern how to do authentication and authorization, where to do TLS termination, how the load balancer should be configured (if one is needed at all), what the MQTT architecture and topic trees should look like, and how and to what level the system can/should scale. To simplify the following discussion we'll set a few requirements:

    Migrating to 2.0

    Release 2.0.0 has a small number of minor incompatibilities:

    Error Logger

    VerneMQ now uses the internal logger library instead of the lager library. It's best for your custom VerneMQ plugins to do the same and replace the lager log calls with internal log statements. Instead of using lager:error/2, you can use the following format:

    To use the Logger Macros, add this include line to your module: -include_lib("kernel/include/logger.hrl").

    launchctl limit
    cpu         unlimited      unlimited
    filesize    unlimited      unlimited
    data        unlimited      unlimited
    stack       8388608        67104768
    core        0              unlimited
    rss         unlimited      unlimited
    memlock     unlimited      unlimited
    maxproc     709            1064
    maxfiles    10240          10240
    limit maxfiles 16384 32768
    ulimit -n 65536
    sysctl fs.file-max
    fs.file-max = 50384
    vernemq soft nofile 4096
    vernemq hard nofile 65536
    ulimit -n 65536
    session    required   pam_limits.so
    *               soft     nofile          65536
    *               hard     nofile          65536
    #UseLogin no
    UseLogin yes
    ulimit -a
    *               soft     nofile          65536
    *               hard     nofile          65536
    ulimit -a
    set rlim_fd_max=65536
    launchctl limit maxfiles
    vmq-admin plugin show
    +-----------+-----------+-----------------+-----------------------------+
    |  Plugin   |   Type    |     Hook(s)     |            M:F/A            |
    +-----------+-----------+-----------------+-----------------------------+
    |vmq_passwd |application|auth_on_register |vmq_passwd:auth_on_register/5|
    |  vmq_acl  |application| auth_on_publish |  vmq_acl:auth_on_publish/6  |
    |           |           |auth_on_subscribe| vmq_acl:auth_on_subscribe/3 |
    +-----------+-----------+-----------------+-----------------------------+
    allow_anonymous
    topic_alias_max_broker
    receive_max_broker
    vmq_acl.acl_file
    graphite_host
    vmq_acl.acl_reload_interval
    graphite_enabled
    queue_type
    suppress_lwt_on_session_takeover
    max_message_size
    vmq_passwd.password_file
    graphite_port
    max_client_id_size
    upgrade_outgoing_qos
    max_message_rate
    graphite_interval
    allow_multiple_sessions
    systree_enabled
    max_last_will_delay
    retry_interval
    receive_max_client
    max_offline_messages
    max_online_messages
    max_inflight_messages
    allow_register_during_netsplit
    vmq_passwd.password_reload_interval
    topic_alias_max_client
    systree_interval
    allow_publish_during_netsplit
    coordinate_registrations
    remote_enqueue_timeout
    persistent_client_expiration
    allow_unsubscribe_during_netsplit
    graphite_include_labels
    shared_subscription_policy
    queue_deliver_mode
    allow_subscribe_during_netsplit
  • Clients connecting from the internet are using TLS client certificates.
  • The messaging pattern is largely fan-in: The clients continuously publish a lot of messages to a set of topics which have to be handled by the Backend-Services.

  • The client sessions are persistent, which means the broker will store QoS 1 & 2 messages routed to the clients while the clients are offline.

    In the following we'll cover some of these options and concerns.

    Load Balancers and the PROXY Protocol

    Often a load balancer is deployed between MQTT clients and the VerneMQ cluster. One of the main purposes of the load balancer is to ensure that client connections are distributed between the VerneMQ nodes so each node has the same amount of connections. Usually a load balancer provides different load balancing strategies for deciding how to select the node where it should route an incoming connection. Examples of these are random, source hashing (based on source IP) or even protocol-aware balancing based on for example the MQTT client-id. The last two are examples of sticky balancing or session affine strategies where a client will always be routed to the same cluster node as long as the source IP or client-id remains the same.

    When using a load balancer the client is no longer directly connected to the VerneMQ nodes, so the peer port and IP address VerneMQ sees are those of the load balancer, not of the client. The peer information is often important for logging reasons or if a plugin checks it against a white/black list.

    To solve this problem VerneMQ supports the PROXY Protocol v1 and v2 which is designed to transport connection information across proxies. See here how to enable the proxy protocol for an MQTT listener. In case TLS is terminated at the load balancer and client certificates are used PROXY Protocol (v2) will also take care of forwarding TLS client certificate details.

    Client certificates and authentication

    Client certificates are often used to verify and authenticate the clients. VerneMQ makes it possible to make the client certificate common name (CN) available to the authentication plugin system by overriding the MQTT username with the CN before authentication is performed. This works whether TLS is terminated at the load balancer or directly in VerneMQ; if it is terminated at the load balancer, the PROXY Protocol is used to forward the certificate details. In that case the listener can be configured as follows to achieve this effect:

    If TLS is terminated directly in VerneMQ the PROXY protocol isn't needed as the TLS client certificate is directly available in VerneMQ and the CN can be used instead of the username by setting:

    See the details in the MQTT listener section.

    The actual authentication can then be handled by an authentication and authorization plugin like vmq_diversity which supports PostgreSQL, CockroachDB, MongoDB, Redis and MySQL as backends for storing credentials and ACL rules.

    Monitoring and Alerting

    Another important aspect of running a VerneMQ cluster is having proper monitoring and alerting in place. All the usual things should be monitored at the OS level, such as memory and CPU usage, and alerts should be put in place so that action can be taken if a disk is filling up or a VerneMQ node is starting to use too much CPU. VerneMQ exports a large number of metrics and, depending on the use case, these can be used as important indicators that the system is running as expected.

    Performance considerations

    When designing a system like the one described here, there are a number of things to consider in order to get the best performance out of the available resources.

    Lower load through session affine load balancing

    As mentioned earlier clients in this scenario are using persistent sessions. In VerneMQ a persistent session exists only on the VerneMQ node where the client connected. This implies that if the client using a persistent session later reconnects to another node, then the session, including any offline messages, will be moved to the new node. This has a certain overhead and can be avoided if the load balancer in front of VerneMQ is using a session affine load balancing strategy such as IP source hashing to assign the client connecting to a node. Of course this strategy isn't perfect if clients often change their IP addresses, but for most cases it is a huge improvement over a random load balancing strategy.

    Handling large fan-ins

    In many systems the MQTT clients provide a lot of data by periodically broadcasting it to the MQTT cluster. The volume of published messages can very easily become too much for a single subscribing MQTT client to handle. Further, with normal MQTT subscriptions all subscribers would receive the same messages, so adding more subscribers to a topic doesn't help with handling the message volume. To solve this VerneMQ implements a concept called shared subscriptions which makes it possible to distribute MQTT messages published to a topic over several MQTT clients. In this specific scenario this would mean the Backend-Services would consist of a set of clients subscribing to cluster nodes using shared subscriptions.

    To avoid expensive intra-node communication, VerneMQ shared subscriptions support a policy called local_only which means that messages will be delivered to shared subscribers on the local node only and not forwarded to shared subscribers on other nodes in the cluster. With this policy messages for the Backend-Services can be delivered in the fastest and most expedient manner with the lowest overhead. See the shared subscriptions documentation for more information.
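    A small sketch of how this could be configured (the group and topic names are illustrative): the policy is set in vernemq.conf, and the backend consumers subscribe through a shared subscription group:

    # vernemq.conf: deliver shared-subscription messages to local subscribers only
    shared_subscription_policy = local_only

    # the backend consumers would then subscribe to e.g.
    #   $share/backend/telemetry/#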

    Tuning buffer sizes

    Controlling TCP buffer sizes is important in ensuring optimal memory usage. The rule is that the more bandwidth or the lower latency required, the larger the TCP buffer sizes should be. Many IoT devices communicate with very low bandwidth and as such the server side TCP buffer sizes for these do not need to be very large. On the other hand, in this scenario the consumers handling the fan-in in the Backend-Services will receive a lot of messages (thousands or tens of thousands per second) and they can benefit from larger TCP buffer sizes. Read more about tuning TCP buffer sizes here.

    Protecting from overload

    An important guideline in protecting a VerneMQ cluster from overload is to allow only what is necessary. This means having and enforcing sensible authentication and authorization rules, as well as configuring conservatively so resources cannot be exhausted due to human error or MQTT clients that have turned malicious. For example, in VerneMQ it is possible to specify how many offline messages a persistent session can maximally hold via the max_offline_messages setting; it should then be set to the lowest acceptable value that works for all clients, and/or a plugin can be used that is able to override such settings on a per-client basis. The load balancer can also play an important role in protecting the system in that it can control the connect rates as well as impose bandwidth restrictions on clients.
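    As an illustration only (the right limit is entirely use-case dependent), such a limit is set in vernemq.conf:

    max_offline_messages = 1000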

    Deploying a VerneMQ cluster

    Somehow a system like this has to be deployed. How to do this will not be covered here, but it is certainly possible to deploy VerneMQ using tools like Ansible, Chef or Puppet, or to use container solutions such as Kubernetes. For more information on how to deploy VerneMQ on Kubernetes check out our guide: VerneMQ on Kubernetes.

    Removed Features
    • The multiple sessions feature has been fully removed. (you are likely not affected by this)

    • Compatibility to and old (v0.15) subscriber format was removed. (you are likely not affected by this)

    on_deliver hook

    The on_deliver hook now has a Properties argument like the on_deliver_m5 hook. This changes the function arity from on_deliver/6 to on_deliver/7. You can ignore the Properties argument in your on_deliver hook implementation, but you'll have to adapt the function definition, by adding a variable similar to:

    Credentials obfuscation

    VerneMQ now uses internal credentials obfuscation, using the following library: https://github.com/rabbitmq/credentials-obfuscation/. This avoids passwords showing up in stacktraces and/or logs. Your own authentication plugins might need adaptation since you want to decrypt the password "at the last moment". You can check how the internal VerneMQ auth plugins were adapted to make a credentials_obfuscation:decrypt(Password) call to check for a potentially encrypted password before handing it to the database to check.
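    A minimal sketch of such an adaptation in an auth_on_register hook (check_credentials/2 stands in for your own, hypothetical credential store lookup):

    -module(my_db_auth).
    -export([auth_on_register/5]).

    auth_on_register(_Peer, _SubscriberId, UserName, Password, _CleanSession) ->
        %% the password may arrive in obfuscated form; decrypt it as late as possible
        Plain = credentials_obfuscation:decrypt(Password),
        case check_credentials(UserName, Plain) of
            true  -> ok;
            false -> {error, invalid_credentials}
        end.

    %% hypothetical lookup against your own credential store
    check_credentials(_UserName, _PlainPassword) ->
        false.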

    General note

    Some settings related to logging were adapted a bit, and there are additional settings exposed in the vernemq.conf file. The Linux package installer gives you the choice to use an existing vernemq.conf file, or start with a new template. Depending on the number of settings you have changed, it might be easiest to move and save your old vernemq.conf, and then use the new template to re-add your settings.

    launchctl limit
    cpu         unlimited      unlimited
    filesize    unlimited      unlimited
    data        unlimited      unlimited
    stack       8388608        67104768
    core        0              unlimited
    rss         unlimited      unlimited
    memlock     unlimited      unlimited
    maxproc     709            1064
    maxfiles    16384          32768
    $ sudo vmq-admin plugin show --internal
    +-----------------------+-------------+-------------------------------+------------------------------------------------+
    | Plugin                | Type        | Hook(s)                       | M:F/A                                          |
    +-----------------------+-------------+-------------------------------+------------------------------------------------+
    | vmq_swc               | application | metadata_put                  | vmq_swc_plugin:metadata_put/3                  |
    |                       |             | metadata_get                  | vmq_swc_plugin:metadata_get/2                  |
    |                       |             | metadata_delete               | vmq_swc_plugin:metadata_delete/2               |
    |                       |             | metadata_fold                 | vmq_swc_plugin:metadata_fold/3                 |
    |                       |             | metadata_subscribe            | vmq_swc_plugin:metadata_subscribe/1            |
    |                       |             | cluster_join                  | vmq_swc_plugin:cluster_join/1                  |
    |                       |             | cluster_leave                 | vmq_swc_plugin:cluster_leave/1                 |
    |                       |             | cluster_members               | vmq_swc_plugin:cluster_members/0               |
    |                       |             | cluster_rename_member         | vmq_swc_plugin:cluster_rename_member/2         |
    |                       |             | cluster_events_add_handler    | vmq_swc_plugin:cluster_events_add_handler/2    |
    |                       |             | cluster_events_delete_handler | vmq_swc_plugin:cluster_events_delete_handler/2 |
    |                       |             | cluster_events_call_handler   | vmq_swc_plugin:cluster_events_call_handler/3   |
    |                       |             |                               |                                                |
    +-----------------------+-------------+-------------------------------+------------------------------------------------+
    | vmq_generic_msg_store | application | msg_store_write               | vmq_generic_msg_store:msg_store_write/2        |
    |                       |             | msg_store_delete              | vmq_generic_msg_store:msg_store_delete/2       |
    |                       |             | msg_store_find                | vmq_generic_msg_store:msg_store_find/2         |
    |                       |             | msg_store_read                | vmq_generic_msg_store:msg_store_read/2         |
    |                       |             |                               |                                                |
    +-----------------------+-------------+-------------------------------+------------------------------------------------+
    | vmq_config            | module      | change_config                 | vmq_config:change_config/1                     |
    |                       |             |                               |                                                |
    +-----------------------+-------------+-------------------------------+------------------------------------------------+
    | vmq_acl               | application | change_config                 | vmq_acl:change_config/1                        |
    |                       |             | auth_on_publish               | vmq_acl:auth_on_publish/6                      |
    |                       |             | auth_on_subscribe             | vmq_acl:auth_on_subscribe/3                    |
    |                       |             | auth_on_publish_m5            | vmq_acl:auth_on_publish_m5/7                   |
    |                       |             | auth_on_subscribe_m5          | vmq_acl:auth_on_subscribe_m5/4                 |
    |                       |             |                               |                                                |
    +-----------------------+-------------+-------------------------------+------------------------------------------------+
    | vmq_passwd            | application | change_config                 | vmq_passwd:change_config/1                     |
    |                       |             | auth_on_register              | vmq_passwd:auth_on_register/5                  |
    |                       |             | auth_on_register_m5           | vmq_passwd:auth_on_register_m5/6               |
    |                       |             |                               |                                                |
    +-----------------------+-------------+-------------------------------+------------------------------------------------+
    vmq-admin plugin enable --name=vmq_acl
    vmq-admin plugin disable --name=vmq_acl
    plugins.vmq_passwd = on
    plugins.myplugin = on
    plugins.myplugin.path = /path/to/plugin
    vmq_passwd.password_file = ./etc/vmq.passwd
    vmq-admin cluster join discovery-node=<OtherClusterNode>
    vmq-admin cluster leave node=<NodeThatShouldGo> (only the first step!)
    vmq-admin cluster show
    {application, vmq_acl,
     [
      {description, "Simple File based ACL for VerneMQ"},
      {vsn, git},
      {registered, []},
      {applications, [
                      kernel,
                      stdlib,
                      clique
                     ]},
      {mod, { vmq_acl_app, []}},
      {env, [
          {file, "priv/test.acl"},
          {interval, 10},
          {vmq_config_enabled, true},
          {vmq_plugin_hooks, [
                {vmq_acl, change_config, 1, [internal]},
                {vmq_acl, auth_on_publish, 6, []},
                {vmq_acl, auth_on_subscribe, 3, []}
            ]}
        ]}
     ]}.
    -type peer()                :: {inet:ip_address(), inet:port_number()}.
    -type username()            :: binary() | undefined.
    -type password()            :: binary() | undefined.
    -type client_id()           :: binary().
    -type mountpoint()          :: string().
    -type subscriber_id()       :: {mountpoint(), client_id()}.
    -type reg_view()            :: atom().
    -type topic()               :: [binary()].
    -type qos()                 :: 0 | 1 | 2.
    -type routing_key()         :: [binary()].
    -type payload()             :: binary().
    -type flag()                :: boolean().
    vmq-admin set max_client_id_size=45
    vmq-admin set max_client_id_size=45 [email protected]
    vmq-admin set max_client_id_size=45 --all
    vmq-admin show max_client_id_size retry_interval
    +----------------------+------------------+--------------+
    |         node         |max_client_id_size|retry_interval|
    +----------------------+------------------+--------------+
    |[email protected]|        28        |      20      |
    +----------------------+------------------+--------------+
    
    vmq-admin show max_client_id_size retry_interval --node [email protected]
    vmq-admin show max_client_id_size retry_interval --all
    +----------------------+------------------+--------------+
    |         node         |max_client_id_size|retry_interval|
    +----------------------+------------------+--------------+
    |[email protected]|        33        |      20      |
    |[email protected]|        33        |      20      |
    |[email protected]|        33        |      20      |
    |[email protected]|        33        |      20      |
    |[email protected]|        28        |      20      |
    +----------------------+------------------+--------------+
    plugins.vmq_bridge = on
    $ vmq-admin bridge show
    +-----------------+-----------+----------+-------------------+
    |   endpoint      |buffer size|buffer max|buffer dropped msgs|
    +-----------------+-----------+----------+-------------------+
    |192.168.1.10:1883|     0     |    0     |         0         |
    +-----------------+-----------+----------+-------------------+
    vmq_bridge.tcp.br0 = 192.168.1.12:1883
    # use a clean session (defaults to 'off')
    vmq_bridge.tcp.br0.cleansession = off | on
    
    # set the client id (defaults to 'auto', which generates one)
    vmq_bridge.tcp.br0.client_id = auto | my_bridge_client_id
    
    # set keepalive interval (defaults to 60 seconds)
    vmq_bridge.tcp.br0.keepalive_interval = 60
    
    # set the username and password for the bridge connection
    vmq_bridge.tcp.br0.username = my_bridge_user
    vmq_bridge.tcp.br0.password = my_bridge_pwd
    
    # set the restart timeout (defaults to 10 seconds)
    vmq_bridge.tcp.br0.restart_timeout = 10
    
    # VerneMQ indicates to other brokers that the connection
    # is established by a bridge instead of a normal client.
    # This can be turned off if needed:
    vmq_bridge.tcp.br0.try_private = off
    
    # Set the maximum number of outgoing messages the bridge will buffer
    # while not connected to the remote broker. Messages published while
    # the buffer is full are dropped. A value of 0 means buffering is
    # disabled.
    vmq_bridge.tcp.br0.max_outgoing_buffered_messages = 100
    topic [[[ out | in | both ] qos-level] local-prefix remote-prefix]
    # share messages in both directions and use QoS 1
    vmq_bridge.tcp.br0.topic.1 = /demo/+ both 1
    
    # import the $SYS tree of the remote broker and
    # prefix it with the string 'remote'
    vmq_bridge.tcp.br0.topic.2 = $SYS/* in remote
    vmq_bridge.ssl.br0 = 192.168.1.12:1883
    
    # set the username and password for the bridge connection
    vmq_bridge.ssl.br0.username = my_bridge_user
    vmq_bridge.ssl.br0.password = my_bridge_pwd
    
    # define the CA certificate file or the path to the
    # installed CA certificates
    vmq_bridge.ssl.br0.cafile = cafile.crt
    #or
    vmq_bridge.ssl.br0.capath = /path/to/cacerts
    
    # if the remote broker requires client certificate authentication
    vmq_bridge.ssl.br0.certfile = /path/to/certfile.pem
    # and the keyfile
    vmq_bridge.ssl.br0.keyfile = /path/to/keyfile
    
    # disable the verification of the remote certificate (defaults to 'off')
    vmq_bridge.ssl.br0.insecure = off
    
    # set the used tls version (defaults to 'tlsv1.2')
    vmq_bridge.ssl.br0.tls_version = tlsv1.2
    $ sudo vmq-admin plugin disable --name vmq_bridge
    $ sudo vmq-admin plugin enable --name vmq_bridge
    $ sudo vmq-admin bridge start
    listener.tcp.proxy_protocol = on
    listener.tcp.proxy_protocol_use_cn_as_username = on
    listener.ssl.require_certificate = on
    listener.ssl.use_identity_as_username = on
    ?LOG_ERROR("an error happened because: ~p", [Reason]).   % With macro
    logger:error("an error happened because: ~p", [Reason]). % Without macro
    on_deliver(UserName, SubscriberId, QoS, Topic, Payload, IsRetain, _Properties) ->
     ...

    REST Publisher

    VerneMQ provides an HTTP REST pub plugin for publishing messages using HTTP/REST. The http_pub plugin accepts HTTP POST requests containing message payloads, and then forwards those messages to the appropriate MQTT subscribers.

    The HTTP REST plugin can be used to publish messages from a wider range of devices and platforms that may not support MQTT natively. Please note that while the plugin can handle a decent amount of requests, the primary protocol of VerneMQ is MQTT. Whenever possible, it is recommended to use MQTT natively to communicate with VerneMQ.

    Enabling the plugin

    The vmq_http_pub plugin is distributed with VerneMQ as an integrated plugin, but is not enabled by default. After configuring the plugin as described below, make sure to enable the plugin by setting (vernemq.conf):

    Configuration

    Bind plugin to HTTP(s) listener

    By default the plugin is not bound to any listener. It is recommended to use a dedicated HTTPS listener. For security reasons, the use of HTTPS instead of HTTP is preferred. It is possible to have more than one listener.

    This configuration defines an HTTPS listener for an application running on the server at IP address 127.0.0.1 and using port 3001. The listener is used to forward HTTP requests to vmq_http_pub.

    Additionally, this configuration sets the authentication method for the vmq_http_pub instance to API key (which is the default). This means that a valid API key is required to access this instance. The API key needs to have the scope httppub. You can create a new API key as follows:

    It is important to note that this configuration is only a part of a larger configuration file, and that other settings such as SSL certificates, encryption, protocol versions, etc. may also be defined to improve the security and performance of the HTTPS listener.

    Authentication and Authorization

    The plugin currently supports two authentication and authorization modes: "on-behalf-of" and "predefined". "On-behalf-of" means that the client_id, user and password used for authentication and authorization are part of the request (payload). Afterwards, the regular VerneMQ authentication and authorization flows are used. When using "predefined", the client, user and password are bound to the plugin instance. It is recommended to use "on-behalf-of" and use a separate client_id, user and password for REST-based clients. For testing purposes, the plugin also supports the global allow_anonymous flag.

    For on-behalf-of authentication use:

    For predefined, please use a configuration similar to:

    If you configure a listener with "predefined" authorization but provide authorization information (username, password, client_id), that information will be ignored.

    MQTT Payload

    The plugin currently supports three different payload encodings:

    • JSON (regular and base64) in body

    • Header parameters, and payload in body

    • Query String parameters, and payload in body

    Which one to choose depends on your application.

    JSON

    In order to allow more complex payloads to be encoded as part of the JSON, the payload itself can also be base64 encoded. The query string "encoding=base64" has to be used to indicate that the payload is base64 encoded. The encoding query string parameter can either be "base64" or "plain". Plain is the default.

    Header parameters

    Topic, user, password, qos, retain and user_properties can also be part of the HTTP header. The HTTP body is used for the actual message payload. The payload then does not need to be base64 encoded.

    The following header options are supported:

    topic: Topic as string
    user: User (on-behalf-authorization)
    password: Password (on-behalf-authorization)
    client_id: Client ID (on-behalf-authorization)
    qos: QoS (0,1,2)
    retain: Boolean, true or false
    user_properties: Json-style array
    Content-Type: application/json or application/octet-stream

    Query String

    Topic, user, password, qos and retain flag can also be urlencoded as part of the query string. The HTTP body is used for the actual message payload. There is no need to specify the encoding in the query string. Query String currently does not support user_properties.

    Examples

    All required information encoded in the payload

    All required information encoded in the payload (base64payload)

    MQTT information encoded in header parameters

    MQTT information encoded in query string

    Metrics

    The plugin exposes three metrics:

    • The number of messages sent through the REST Publish API

    • Number of errors reported by the REST Publish API

    • Number of Auth errors reported by the REST Publish API

    Misc Notes

    • The plugin allows the authentication and authorization flows to override mountpoint, max_message_size, qos and topic.

    • Currently, the regular (non m5) authentication and authorization flow is used.

  • The query string payload does not allow setting user_properties.

    • The plugin currently checks the maximum payload size before base64 decoding.

    Not a tuning guide

    General relation to OS configuration values

    You need to know about and configure a couple of Operating System and Erlang VM configs to operate VerneMQ efficiently. First, make sure you have set appropriate OS file limits according to our guide here. Second, when you run into performance problems, don't forget to check the settings in the vernemq.conf file. (Can't open more than 10k connections? Well, is the listener configured to open more than 10k?)

    TCP buffer sizes

    This is the number one topic to look at, if you need to keep an eye on RAM usage.

    Context: All network I/O in Erlang uses an internal driver. This driver will allocate and handle an internal application side buffer for every TCP connection. The default size of these buffers will determine your overall RAM use in VerneMQ. The sndbuf and recbuf of the TCP socket will not count towards VerneMQ RAM, but will be used by the Linux Kernel.

    VerneMQ calculates the buffer size from the OS level TCP send and receive buffers:

    val(buffer) >= max(val(sndbuf),val(recbuf))

    Those values correspond to net.ipv4.tcp_wmem and net.ipv4.tcp_rmem in your OS's sysctl configuration. One way to minimize RAM usage is therefore to configure those settings (Debian example):

    This would result in a 32KB application buffer for every connection.

    If your VerneMQ use case requires the use of different TCP buffer optimisations (per groups of clients for instance) you will have to make sure that the Linux OS buffer configuration, namely net.ipv4.tcp_wmem and net.ipv4.tcp_rmem, allows for this kind of flexibility, allowing for small TCP buffers and big TCP buffers at the same time.

    Example 1 above would allow VerneMQ to allocate minimal TCP read and write buffers of 4KB in the Linux Kernel, a max read buffer of 32KB in the kernel, and a max write buffer of 65KB in the kernel. VerneMQ itself would set its own internal per connection buffer to 65KB in addition.

    What we just described is VerneMQ automatically configuring TCP read and write buffers and internal buffers, deriving their values from OS settings.

    There are multiple additional ways to configure TCP buffers described below:

    Setting TCP buffer sizes globally within VerneMQ:

    If VerneMQ finds an advanced.config file, it will use the buffer sizes you have configured there for all its TCP listeners (and the TCP connections accepted by those listeners), except the Erlang distribution listeners within the cluster.

    (You'll find an example in the section below on the advanced.config )

    Per protocol (since 1.8.0):

    If VerneMQ finds a per protocol configuration (listener.tcp.buffer_sizes) in the vernemq.conf file, it will use those buffer sizes for the specific protocol. (currently only MQTT or MQTTS. Support for WS/WSS/HTTP/VMQ listeners is on the roadmap).

    For listener.tcp.buffer_sizes you’ll always have to state 3 values in bytes: the TCP receive buffer (recbuf), the TCP send buffer (sndbuf), and the internal application side buffer (buffer). You should set “buffer” (the 3rd value) to val(buffer) >= max(val(sndbuf),val(recbuf))

    Per listener (since 1.8.0):

    If VerneMQ finds per listener config values (listener.tcp.my_listener.buffer_sizes), it will use those buffer sizes for all connections setup by that specific listener. This is the most useful approach if you want to set specific different buffer sizes, like huge send buffers for listeners that accept massive consumers. (consumers with high expected message throughput).

    You would then configure a different listener for those massive consumers, and by that have the option to fine tune the TCP buffer sizes.

    For listener.tcp.my_listener.buffer_sizes you’ll always have to state 3 values in bytes: the TCP receive buffer (recbuf), the TCP send buffer (sndbuf), and an internal application side buffer (buffer). You should set “buffer” (the 3rd value) to val(buffer) >= max(val(sndbuf),val(recbuf))
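    A sketch of such a dedicated listener (name, address and sizes are illustrative; the three values follow the recbuf, sndbuf, buffer order described above):

    listener.tcp.backend_consumers = 10.0.1.10:1885
    # generous send and application buffers for high-throughput consumers
    listener.tcp.backend_consumers.buffer_sizes = 65536,262144,262144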

    VerneMQ per single ClientID/or TCP connection:

    This scenario would be possible with a plugin.

    The advanced.config file

    The advanced.config file is a supplementary configuration file that sits in the same directory as the vernemq.conf. You can set additional config values for any of the OTP applications that are part of a VerneMQ release. To just configure the TCP buffer size manually, you can create an advanced.config file:

    The vm.args file

    For very advanced & custom configurations, you can add a vm.args file to the same directory where the vernemq.conf file is located. Its purpose is to configure parameters for the Erlang Virtual Machine. This will override any Erlang specific parameters you might have configured via the vernemq.conf. Normally, VerneMQ auto-generates a vm.args file for every boot in /var/lib/vernemq/generated.configs/ (Debian package example) from vernemq.conf and other potential configuration sources.

    A manually generated vm.args is not supplementary, it is a full replacement of the auto-generated file! Keep that in mind. An easy way to go about this, is by copying and extending the auto-generated file.

    This is how a vm.args file might look:

    A note on TLS

    Using TLS will of course increase the CPU load during connection setup. Latencies in message delivery will be increased, and your overall message throughput per second will be lower.

    TLS will require considerably more RAM. Instead of 2 Erlang processes per connection, TLS will use 3. You'll have a session process, a queue process, and a TLS handler process that can encapsulate quite a big state (> 30KB).

    Erlang/OTP uses its own TLS implementation, only using OpenSSL for crypto, but not connection handling. For situations with high connection setup rate or overall high connection churn rate, the Erlang TLS implementation might be too slow. On the other hand, Erlang TLS gives you great concurrency & fault isolation for long-lived connections.

    Some Erlang deployments terminate SSL/TLS with an external component or with a load balancer component. Do some testing & try to find out what works best for you.

    The Erlang TLS implementation is rather picky about certificate chains & formats. Don't give up if you encounter errors at first. On Linux, you can quickly find out more with the openssl s_client command.

    VerneMQ on Kubernetes

    This guide describes how to deploy a VerneMQ cluster on Kubernetes

    Intro

    Kubernetes (K8s) is possibly the most mature technology for deploying Docker containers at scale. While running a single Docker container is supposed to be easy, running a Kubernetes cluster definitely isn't. That's why we recommend working with a certified Kubernetes partner such as Amazon AWS EKS, Google Cloud GKE, Microsoft Azure AKS, or DigitalOcean.

    If your applications already live in Docker containers and are deployed on Kubernetes it can be beneficial to also run VerneMQ on Kubernetes. This guide covers how to successfully deploy a VerneMQ cluster on Kubernetes. Multiple options exist to deploy a VerneMQ cluster at this point. This guide describes how to use the official Helm chart as well as the still experimental Kubernetes Operator.

  • The verbs "put" and "post" are supported. There is no difference in functionality.
    plugins.vmq_http_pub = on
    listener.https.http_pub = 127.0.0.1:3001
    listener.https.http_pub.http_module.vmq_http_pub.auth.mode = apikey
    listener.https.http_pub.http_modules = vmq_http_pub
    vmq-admin api-key create scope=httppub
    listener.https.http_pub.http_modules.vmq_http_pub.mqtt_auth.mode = on-behalf-of
    listener.https.http_pub.http_modules.vmq_http_pub.mqtt_auth.mode = predefined
    listener.https.http_pub.http_modules.vmq_http_pub.mqtt_auth.user = restUser
    listener.https.http_pub.http_modules.vmq_http_pub.mqtt_auth.password = restPasswd
    listener.https.http_pub.http_modules.vmq_http_pub.mqtt_auth.client_id = restClient
    {
    	"topic": "testtopic/testtopic1",
    	"user": "testuser",
    	"password": "test123",
    	"qos": 1,
    	"retain": false,
    	"payload": "this is a payload string",
    	"user_properties": [{"a":"b"}]
    }
    curl --request POST \
      --url https://mqtt.myhost.example:3001/restmqtt/api/v1/publish \
      --header 'Authorization: Basic ...' \
      --header 'Content-Type: application/json' \
      --data '{
    	"topic": "T1",
    	"user": "myuser",
    	"password": "test123",
    	"client_id": "myclient",
    	"qos": 1,
    	"retain": false,
    	"payload": "asddsadsadas22dasasdsad",
    	"user_properties": [{"a":"b"}]
    }'
    curl --request POST \
      --url 'https://mqtt.myhost.example:3001/restmqtt/api/v1/publish?encoding=base64' \
      --header 'Authorization: Basic ...' \
      --header 'Content-Type: application/json' \
      --data '{
    	"topic": "a/b/c",
    	"user": "myuser",
    	"password": "test123",
    	"client_id": "myclient",
    	"qos": 1,
    	"retain": false,
    	"payload": "aGFsbG8gd2VsdA==",
    	"user_properties": [{"a":"b"}]
    }'
    curl --request POST \
      --url https://mqtt.myhost.example:3001/restmqtt/api/v1/publish \
      --header 'Authorization: Basic ...' \
      --header 'Content-Type: application/json' \
      --header 'QoS: 1' \
      --header 'clientid: myclient' \
      --header 'password: test123' \
      --header 'retain: false' \
      --header 'topic: T1' \
      --header 'user: myuser' \
      --header 'user_properties: [{"a":"b2"}]' \
      --data '{"hello": "world"}'
    curl --request POST \
      --url 'https://mqtt.myhost.example:3001/restmqtt/api/v1/publish?topic=a%2Fb%2Fc&user=test-user3&password=test123&client_id=test-client3&qos=0' \
      --header 'Authorization: Basic Og==' \
      --header 'Content-Type: application/json' \
      --data '{"Just a": "test"}'
    sudo sysctl -w net.ipv4.tcp_rmem="4096 16384 32768"
    sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 32768"
    
    # Nope, these values are not recommendations!
    # You really need to decide yourself.
    Example 1 (from Linux OS config):
    net.ipv4.tcp_rmem="4096 16384 32768"
    net.ipv4.tcp_wmem="4096 16384 65536"
    Example 2 (vernemq.conf):
    listener.tcp.buffer_sizes = 4096,16384,32768
    Example 3: (vernemq.conf)
    listener.tcp.my_listener.buffer_sizes = 4096,16384,32768
    [{vmq_server, [
              {tcp_listen_options,
              [{sndbuf, 4096},
               {recbuf, 4096}]}]}].
    +P 256000
    -env ERL_MAX_ETS_TABLES 256000
    -env ERL_CRASH_DUMP /erl_crash.dump
    -env ERL_FULLSWEEP_AFTER 0
    -env ERL_MAX_PORTS 262144
    +A 64
    -setcookie vmq  # Important: Use your own private cookie... 
    -name [email protected]
    +K true
    +sbt db
    +sbwt very_long
    +swt very_low
    +sub true
    +Mulmbcs 32767
    +Mumbcgs 1
    +Musmbcs 2047
    # Nope, these values are not recommendations!
    # You really need to decide yourself, again ;)
    For the sake of clarity, this guide defines the following terms:
    • Kubernetes Node: A single virtual or physical machine in a Kubernetes cluster.

    • Kubernetes Cluster: A group of nodes firewalled from the internet, that are the primary compute resources managed by Kubernetes.

    • Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.

    • Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.

    • Service: A Kubernetes Service that identifies a set of pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network

    • VerneMQ Cluster: A group of VerneMQ containers that are connected via the Erlang Distribution as well as the VerneMQ clustering mechanism.

    This guide assumes that you're familiar with Kubernetes

    Deploy VerneMQ with Helm

    Helm calls itself the package manager for Kubernetes. In Helm a package is called a chart. VerneMQ comes with such a Helm chart, simplifying the initial setup tremendously. If you haven't set up Helm yet, please go through their quickstart guide.

    Once Helm is properly set up, just run the following command in your shell.
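    A sketch of the usual two steps (the chart repository URL is the one published with the VerneMQ Docker/Helm project; the release name vernemq is illustrative):

    helm repo add vernemq https://vernemq.github.io/docker-vernemq
    helm install vernemq vernemq/vernemq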

    This will deploy a single node VerneMQ cluster. Have a look at the possible configuration here.

    Deploy VerneMQ using the Kubernetes Operator

    A Kubernetes Operator is a method of packaging, deploying and managing a Kubernetes application. The VerneMQ Operator is basically just a Pod with the task to deploy a VerneMQ cluster given a so called Custom Resource Definition (CRD). The VerneMQ CRD aims to allow all required configuration to be made through the CRD, so no further configuration should be required. The following command installs the operator along with a two node VerneMQ cluster into the namespace messaging.

    This will result in the following Pods:

    And the following cluster status:

    At this point you would like to further configure authentication and authorization. The following port forwards may be useful at this point.

    kubectl port-forward svc/vernemq-k8s --namespace messaging 1883:1883
    kubectl port-forward svc/vernemq-k8s --namespace messaging 8888:8888

    Load Balancing in Kubernetes

    In a VerneMQ cluster it doesn't matter to which node an MQTT client connects, subscribes or publishes. A VerneMQ cluster looks like one big MQTT broker to the outside. While this is the main idea of VerneMQ, it comes with a cost, namely the data replication/synchronization overhead when 'persistent' clients hop from one pod to the other. As a consequence, we recommend to intelligently choose how to load balance your MQTT clients.

    Load balancing in Kubernetes is configured via the Service object. Multiple service types exist:

    ClusterIP

    The ClusterIP type is the default and only permits access from within the Kubernetes cluster. Other pods in the Kubernetes cluster can access VerneMQ via ClusterIP:Port . The underlying balancing strategy is based on the settings of kube-proxy. Also this type requires that one terminates TLS either in VerneMQ directly or via a different Pod e.g. HAproxy.

    NodePort

    The NodePort type uses ClusterIP under the hood but allocates a Port on every Kubernetes node and routes incoming traffic from NodeIP:NodePort to the ClusterIP:Port . Like with ClusterIP this type requires that one terminates TLS either in VerneMQ directly or via a different Pod e.g. HAproxy.

    Loadbalancer

    The Loadbalancer type uses an external load balancer provided by the cloud provider. In fact this Service type only provides the glue code required to interact with the Loadbalancing services from different cloud providers. If you're running a bare-metal Kubernetes cluster you won't be able to use this Service type, unless you deploy a Kubernetes aware network loadbalancer yourself. Check out MetalLB, which provides a network loadbalancer for bare-metal Kubernetes clusters.

    Every Kubernetes node runs a kube-proxy. kube-proxy maps virtual IP addresses to services and creates the required routes in the system so that pods can communicate with each other.

    kube-proxy supports multiple modes of operation:

  • userspace since v1.0

  • iptables, the default since v1.2

  • ipvs, stable since v1.11, only available if the Kernel of the Kubernetes node supports it.

    The performance and scalability characteristics of VerneMQ depend on the proxy-mode and the related configurations. This is especially true for load-balancing specific functionality such as session affinity. E.g. only ipvs supports an efficient way to provide session affinity via the source hashing strategy.

    Ingress Controllers

    Ingress controllers provide another way to do load balancing and TLS termination in a Kubernetes cluster. However the officially supported ingress controllers nginx and GCE focus on balancing HTTP requests instead of plain TCP connections. Therefore their support for TLS termination is also limited to HTTPS.

    Multiple third-party ingress controllers exist, however most of them focus on handling HTTP requests. One of the exceptions is Voyager by AppsCode, an ingress controller based on HAProxy, which also efficiently terminates TLS.

    General recommendations for large scale deployments

    1. Use an external loadbalancer provided by the cloud provider that is capable of terminating TLS and apply a load balancing strategy that provides session affinity e.g. via source hashing.

    2. Terminate TLS outside VerneMQ.

    3. Configure the Pod NodeAffinity correctly to ensure that only one VerneMQ pod is scheduled on any Kubernetes cluster node.

    4. It's preferred to have a smaller number of Pods that are very powerful in terms of available CPU and RAM than the opposite.


    HTTP API

    Everything you need to know to work with the VerneMQ HTTP administration interface

    The VerneMQ HTTP API is enabled by default and installs an HTTP handler on http://localhost:8888/api/v1. To read more about configuring the HTTP listener, see HTTP Listener Configuration. You can configure an HTTP listener or an HTTPS listener to serve the HTTP API v1.

    Managing API keys

    The VerneMQ HTTP API uses basic authentication where an API key is passed as the username and the password is left empty; as an alternative, the x-api-key header can be used. API keys have a scope and can optionally have an expiry date. So the first step to use the HTTP API is to create an API key.

    Scopes

    Each HTTP Module can be protected by an API key. An API key can be limited to a certain HTTP module, or further restrict some functionality within that module. The scope used by the management API is "mgmt". Currently, the following scopes are supported: "status", "mgmt", "metrics", "health".

    Create API key
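    In its simplest form (no scope or expiry given):

    vmq-admin api-key create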

    or with scope and an expiry date (in local time)

    The keys are persisted and available on all cluster nodes.

    List API keys

    To list existing keys do:

    Add API key

    To add an API key of your own choosing, do:

    Delete API key

    To delete an API key do:

    Advanced Settings (key rotation, key complexity)

    You can specify the minimum length of an API key (default: 0) in vernemq.conf

    or set a maximum duration after which an API key expires (default: undefined)

    Please note that changing these settings after API keys have already been created has no effect on the existing keys.

    You can enable or disable API key authentication per module, or per module per listener.

    Possible modules are vmq_metrics_http, vmq_http_mgmt_api, vmq_status_http, vmq_health_http. Possible values for auth.mode are noauth or apikey.

    API usage

    The VerneMQ HTTP API is a wrapper over the CLI tool, and anything that can be done using vmq-admin can be done using the HTTP API. Note that the HTTP API is therefore subject to any changes made to the vmq-admin tools and their flags & options structure. All requests are performed using an HTTP GET and, if no errors occurred, an HTTP 200 OK code is returned with a possible non-empty JSON payload.

    The API is using basic auth where the API key is passed as the username. An example using curl would look like this:

    The mapping between vmq-admin and the HTTP API is straightforward, and if one is already familiar with how the vmq-admin tool works, working with the API should be easy. The mapping works such that the command part of a vmq-admin invocation is turned into a path, and the options and flags are turned into the query string.

    A mandatory parameter like the client-id in the vmq-admin session disconnect client-id=myclient command should be translated as: ?client-id=myclient.

    An optional flag like --cleanup in the vmq-admin session disconnect client-id=myclient --cleanup command should be translated as: &--cleanup
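    Putting these translation rules together, a small illustrative Python helper (not an official client; the command, option and flag below are just examples) could build such URLs like this:

    from urllib.parse import quote

    def api_url(base, command, options=None, flags=None):
        # command: "session disconnect" -> path /session/disconnect
        # options: {"client-id": "myclient"} -> ?client-id=myclient
        # flags:   ["--cleanup"]            -> &--cleanup
        path = "/".join(command.split())
        parts = [f"{k}={quote(str(v))}" for k, v in (options or {}).items()]
        parts += list(flags or [])
        query = "&".join(parts)
        return f"{base}/{path}" + (f"?{query}" if query else "")

    print(api_url("http://localhost:8888/api/v1", "session disconnect",
                  {"client-id": "myclient"}, ["--cleanup"]))
    # -> http://localhost:8888/api/v1/session/disconnect?client-id=myclient&--cleanup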

    Let's look at the cluster join command as an example, which looks like this:

    This turns into a GET request:

    To test, run it with curl:

    And the returned response would look like:

    Below are some other examples.

    Get cluster status information

    Request:

    Curl:

    Response:

    Retrieve session information

    Request:

    Curl:

    Response:

    List all installed listeners

    Request:

    Curl:

    Response:

    Retrieve plugin information

    Request:

    Curl:

    Response:

    Set configuration values

    Request:

    Curl:

    Response:

    Disconnect a client

    Request:

    Curl:

    Response:

    Change Open File Limits

    How to change the open file limits

    VerneMQ can consume a large number of open file handles when thousands of clients are connected as every connection requires at least one file handle.

    Most operating systems can change the open-files limit using the ulimit -n command. Example:

    However, this only changes the limit for the current shell session. Changing the limit on a system-wide, permanent basis varies more between systems.

    What will actually happen when VerneMQ runs out of OS-side file descriptors?

    In short, VerneMQ will be unable to function properly because it can't open database files or accept incoming connections. If you see exceptions with {error,emfile} in the VerneMQ log files, you now know what to do: increase the OS limits as described below.

    Linux

    On most Linux distributions, the total limit for open files is controlled by sysctl.

    An alternative way to read the file-max settings is:

    This might be high enough for your VerneMQ deployment, or not - we cannot know that. You will need at least 1 file descriptor per TCP connection, and VerneMQ needs additional file descriptors for file access etc. Also, if you have other components running on the system, you might want to consult the manpage for how to change that setting. The fs.file-max setting represents the global maximum of file handlers a Linux kernel will allocate. Make sure this is high enough for your system.

    Once you're good regarding file-max, you still need to configure the per-process open files limit. You'll set the number of file descriptors a single process or application like VerneMQ is allowed to grab. As every process belongs to a user, you need to bind the setting to a Linux user (here, the vernemq user). To do this, edit /etc/security/limits.conf, for which you'll need superuser access. If you installed VerneMQ from a binary package, add lines for the vernemq user, substituting your desired hard and soft limits:

    On Ubuntu, if you’re always relying on the init scripts to start VerneMQ, you can create the file /etc/default/vernemq and specify a manual limit:

    This file is automatically sourced from the init script, and the VerneMQ process started by it will properly inherit this setting. As init scripts are always run as the root user, there’s no need to specifically set limits in /etc/security/limits.conf if you’re solely relying on init scripts.

    On CentOS/RedHat systems, make sure to set a proper limit for the user you’re usually logging in with to do any kind of work on the machine, including managing VerneMQ. On CentOS, sudo properly inherits the values from the executing user.

    Linux and Systemd service files

    Newer VerneMQ packages use a systemd service file. You can adapt the LimitNOFILE setting in the vernemq.service file to the value you need. It is set to infinity by default already, so you only need to adapt it in case you want a lower value. The reason we need to enforce the setting is that systemd doesn't automatically take over the nofile settings from the OS.

    Enable PAM-Based Limits for Debian & Ubuntu

    It can be helpful to enable PAM user limits so that non-root users, such as the vernemq user, may specify a higher value for maximum open files. For example, follow these steps to enable PAM user limits and set the soft and hard values for all users of the system to allow for up to 65536 open files.

    Edit /etc/pam.d/common-session and append the following line:

    If /etc/pam.d/common-session-noninteractive exists, append the same line as above.

    Save and close the file.

    Edit /etc/security/limits.conf and append the following lines to the file:

    1. Save and close the file.

    2. (optional) If you will be accessing the VerneMQ nodes via secure shell (ssh), you should also edit /etc/ssh/sshd_config and uncomment the following line:

    and set its value to yes as shown here:

    3. Restart the machine so that the limits take effect and verify that the new limits are set with the following command:

    Enable PAM-Based Limits for CentOS and Red Hat

    1. Edit /etc/security/limits.conf and append the following lines to the file:

    2. Save and close the file.

    3. Restart the machine so that the limits take effect and verify that the new limits are set with the following command:

    In the above examples, the open files limit is raised for all users of the system. If you prefer, the limit can be specified for the vernemq user only by substituting the two asterisks (*) in the examples with vernemq.

    Solaris

    In Solaris 8, there is a default limit of 1024 file descriptors per process. In Solaris 9, the default limit was raised to 65536. To increase the per-process limit on Solaris, add the following line to /etc/system:

    Reference:

    Mac OS X

    To check the current limits on your Mac OS X system, run:

    The last two columns are the soft and hard limits, respectively.

    To adjust the maximum open file limits in OS X 10.7 (Lion) or newer, edit /etc/launchd.conf and increase the limits for both values as appropriate.

    For example, to set the soft limit to 16384 files, and the hard limit to 32768 files, perform the following steps:

    1. Verify current limits:

      The response output should look something like this:

    2. Edit (or create) /etc/launchd.conf and increase the limits. Add lines that look like the following (using values appropriate to your environment):

    Attributions

    This work, "Open File Limits", is a derivative of Open File Limits by Riak, used under Creative Commons Attribution 3.0 Unported License. "Open File Limits" is licensed under Creative Commons Attribution 3.0 Unported License by Erlio GmbH.

    helm install vernemq/vernemq
    curl -L https://codeload.github.com/vernemq/vmq-operator/zip/master --output repo.zip; \
    unzip -j repo.zip '*/examples/only_vernemq/*' -d only_vernemq; \
    kubectl apply -f only_vernemq
    kubectl get pods --namespace messaging
    NAME                                      READY   STATUS        RESTARTS   AGE
    vernemq-k8s-0                             1/1     Running       0          53m
    vernemq-k8s-1                             1/1     Running       0          4m14s
    vernemq-k8s-deployment-59f5684549-s7jd4   1/1     Running       0          2d17h
    vmq-operator-76f5f78f96-2jbwt             1/1     Running       0          4m28s
    kubectl exec vernemq-k8s-0 vmq-admin cluster show --namespace messaging
    +-----------------------------------------------------------------+-------+
    |                              Node                               |Running|
    +-----------------------------------------------------------------+-------+
    |vmq@vernemq-k8s-0.vernemq-k8s-service.messaging.svc.cluster.local| true  |
    |vmq@vernemq-k8s-1.vernemq-k8s-service.messaging.svc.cluster.local| true  |
    +-----------------------------------------------------------------+-------+
    ulimit -n 262144
    vmq-admin
    Save the file, and restart the system for the new limits to take effect. After restarting, verify the new limits with the launchctl limit command:

    The response output should look something like this:

    sysctl manpage
    $ vmq-admin api-key create
    JxctXkZ1OTVnlwvguSCE9KtujacMkOLF
    $ vmq-admin api-key create scope=mgmt expires=2023-04-04T12:00:00
    q85i5HbFCDdAVLNJuOj48QktDbchvOMS
    $ vmq-admin api-key show
    +----------------------------------+-------+---------------------+-------------+
    | Key                              | Scope | Expires (UTC)       | has expired |
    +----------------------------------+-------+---------------------+-------------+
    | q85i5HbFCDdAVLNJuOj48QktDbchvOMS | mgmt  | 2023-04-04 10:00:00 | false       |
    +----------------------------------+-------+---------------------+-------------+
    | JxctXkZ1OTVnlwvguSCE9KtujacMkOLF | mgmt  | never               | false       |
    +----------------------------------+-------+---------------------+-------------+
    vmq-admin api-key add key=mykey
    vmq-admin api-key delete key=JxctXkZ1OTVnlwvguSCE9KtujacMkOLF
    min_apikey_length = 30
    max_apikey_expiry_days = 180
    http_module.$module.auth.mode 
    listener.http.$name.http_module.$module.auth.mode
    listener.https.$name.http_module.$module.auth.mode
    curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/session/show"
    vmq-admin cluster join [email protected]
    GET /api/v1/cluster/[email protected]
    curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/cluster/[email protected]"
    {
        "text": "Done",
        "type": "text"
    }
    GET /api/v1/cluster/show
    curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/cluster/show"
    {
       "type" : "table",
       "table" : [
          {
             "Running" : true,
             "Node" : "[email protected]"
          }
       ]
    }
    GET /api/v1/session/show
    curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/session/show"
    {
       "type" : "table",
       "table" : [
          {
             "user" : "client1",
             "peer_port" : 50402,
             "is_online" : true,
             "mountpoint" : "",
             "client_id" : "mosq/qJpvoqe1PA4lBN1e4E",
             "peer_host" : "127.0.0.1"
          },
          {
             "user" : "client2",
             "is_online" : true,
             "peer_port" : 50406,
             "peer_host" : "127.0.0.1",
             "client_id" : "mosq/tikkXdlM28PaznBv2T",
             "mountpoint" : ""
          }
       ]
    }
    
    GET /api/v1/listener/show
    curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/listener/show"
    {
       "type" : "table",
       "table" : [
          {
             "max_conns" : 10000,
             "port" : "8888",
             "mountpoint" : "",
             "ip" : "127.0.0.1",
             "type" : "http",
             "status" : "running"
          },
          {
             "status" : "running",
             "max_conns" : 10000,
             "port" : "44053",
             "mountpoint" : "",
             "ip" : "0.0.0.0",
             "type" : "vmq"
          },
          {
             "max_conns" : 10000,
             "port" : "1883",
             "mountpoint" : "",
             "ip" : "127.0.0.1",
             "type" : "mqtt",
             "status" : "running"
          }
       ]
    }
    GET /api/v1/plugin/show
    curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/plugin/show"
        {
       "type" : "table",
       "table" : [
          {
             "Hook(s)" : "auth_on_register\n",
             "Plugin" : "vmq_passwd",
             "M:F/A" : "vmq_passwd:auth_on_register/5\n",
             "Type" : "application"
          },
          {
             "Type" : "application",
             "M:F/A" : "vmq_acl:auth_on_publish/6\nvmq_acl:auth_on_subscribe/3\n",
             "Plugin" : "vmq_acl",
             "Hook(s)" : "auth_on_publish\nauth_on_subscribe\n"
          }
       ]
    }
    GET /api/v1/set?allow_publish_during_netsplit=on
    curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/set?allow_publish_during_netsplit=on"
    []
    GET /api/v1/session/disconnect?client-id=myclient&--cleanup
    curl "http://JxctXkZ1OTVnlwvguSCE9KtujacMkOLF@localhost:8888/api/v1/session/disconnect?client-id=myclient&--cleanup"
    []
    launchctl limit
    cpu         unlimited      unlimited
    filesize    unlimited      unlimited
    data        unlimited      unlimited
    stack       8388608        67104768
    core        0              unlimited
    rss         unlimited      unlimited
    memlock     unlimited      unlimited
    maxproc     709            1064
    maxfiles    16384          32768
    sysctl fs.file-max
    fs.file-max = 262144
    cat /proc/sys/fs/file-max
    vernemq soft nofile 65536
    vernemq hard nofile 262144
    ulimit -n 262144
    LimitNOFILE=infinity
    session    required   pam_limits.so
    *               soft     nofile          65536
    *               hard     nofile          262144
    #UseLogin no
    UseLogin yes
    ulimit -a
    *               soft     nofile          65536
    *               hard     nofile          262144
    ulimit -a
    set rlim_fd_max=262144
    launchctl limit maxfiles
    launchctl limit
    cpu         unlimited      unlimited
    filesize    unlimited      unlimited
    data        unlimited      unlimited
    stack       8388608        67104768
    core        0              unlimited
    rss         unlimited      unlimited
    memlock     unlimited      unlimited
    maxproc     709            1064
    maxfiles    10240          10240
    limit maxfiles 16384 32768

    Auth using a database

    VerneMQ supports multiple ways to authenticate and authorize new client connections using a database.

    Introduction and general setup

    VerneMQ supports authentication and authorization using a number of popular databases and the below sections describe how to configure the different databases.

    The database drivers are handled using the vmq_diversity plugin and it therefore needs to be enabled:

    The vmq_diversity plugin makes it possible to extend VerneMQ using Lua. The documentation can be found in the Lua Scripting Support section.

    When using database based authentication/authorization the enabled-by-default file based authentication and authorization are most likely not needed and should be disabled:

    You must set allow_anonymous = off, otherwise VerneMQ won't use the database plugin for authentication and authorization.

    In order to use a database for authentication and authorization, the database must be properly configured and the auth data (username, client id, password, ACLs) must be present. The following sections show some sample requests that can be used to insert such data.

    While the handling of authentication differs among the different databases, the handling of ACLs is roughly identical and makes use of a JSON array containing one or many ACL objects per configured client.

    The database integrations will cache the ACLs when the client connects avoiding expensive database lookups for each publish or subscribe message. The cache entries are evicted when the client disconnects.

    A minimal publish & subscribe ACL JSON object takes the following form:

    General ACL

    The pattern is an MQTT topic string that can contain MQTT wildcards, but also the template variables %m (mountpoint), %u (username), and %c (client id), which are automatically substituted with the auth data provided.
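    As a concrete illustration, the following Python sketch mimics the substitution for made-up auth data (username test-user, client id test-client, empty mountpoint):

    # Hypothetical auth data, for illustration only
    substitutions = {"%m": "", "%u": "test-user", "%c": "test-client"}

    pattern = "devices/%u/%c/#"
    for variable, value in substitutions.items():
        pattern = pattern.replace(variable, value)

    print(pattern)  # -> devices/test-user/test-client/#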

    Publish ACL

    The publish ACL makes it possible to control the maximum QoS and payload size that is allowed, and if the message is allowed to be retained.

    Moreover, the publish ACL makes it possible to modify the properties of a published message by specifying one or multiple modifiers. Please note that the modified message isn't re-validated by the ACL.

    Subscribe ACL

    The subscribe ACL makes it possible to control the maximum QoS a client is allowed to subscribe to.

    Like the publish ACL, the subscribe ACL makes it possible to change the current subscription request by returning a custom set of topic/qos pairs. Please note that the modified subscription isn't re-validated by the ACL.

    Password verification and hashing methods

    When deciding on which database to use one has to consider which kind of password hashing and key derivation functions are available and required. Different databases provide different mechanisms, for example PostgreSQL provides the pgcrypto module which supports verifying hashed and salted passwords, while Redis has no such features. VerneMQ therefore also provides client-side password verification mechanisms such as bcrypt.

    There is a trade-off between verifying passwords on the client side versus on the server side. Verifying passwords client-side means doing the computations on the VerneMQ broker, which takes away resources from other tasks such as routing messages. With hashing functions such as bcrypt, which are designed to be slow (proportional to the number of rounds) in order to make brute-force attacks infeasible, this can become a problem. For example, if verifying a password with bcrypt takes 0.5 seconds, then a single-threaded core can do 2 verifications per second, and 4 single-threaded cores can do 8 verifications per second. The number of rounds and other security parameters therefore have a direct impact on the maximum number of verifications per second, and hence also on the maximum arrival rate of new clients per second.
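    To get a feeling for these numbers on your own hardware, a rough benchmark along the following lines can help; this sketch assumes the third-party Python bcrypt package and is not part of VerneMQ:

    import time
    import bcrypt  # assumption: requires the Python bcrypt package

    password = b"super-secret"
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))  # 12 rounds as an example

    runs = 10
    start = time.perf_counter()
    for _ in range(runs):
        bcrypt.checkpw(password, hashed)
    per_check = (time.perf_counter() - start) / runs
    print(f"~{per_check:.3f}s per verification, ~{1 / per_check:.1f} verifications/s per core")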

    For each database it is specified which password verification mechanisms are available and if they are client-side or server-side.

    Note that currently bcrypt version `2a` (prefix `$2a$`) is supported.
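    If you need to generate such a hash for the passhash/password fields shown in the following sections, a sketch like this can be used (again assuming the Python bcrypt package):

    import bcrypt  # assumption: requires the Python bcrypt package

    # gensalt() accepts a prefix; "2a" matches the hash version mentioned above
    pwhash = bcrypt.hashpw(b"123", bcrypt.gensalt(rounds=12, prefix=b"2a"))
    print(pwhash.decode())  # e.g. $2a$12$...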

    PostgreSQL

    To enable PostgreSQL authentication and authorization the following need to be configured in the vernemq.conf file:

    In case your PostgreSQL database requires SSL, you'll have to tell the plugin:

    Consult the vernemq.conf file for more info about additional options:

    PostgreSQL hashing methods:

    method | client-side | server-side
    bcrypt | ✓           |
    crypt  |             | ✓

    Creating the Postgres tables

    The following SQL DDL must be applied; the pgcrypto extension is required if using the server-side crypt hashing method:

    To enter new ACL entries use a query similar to the following:

    CockroachDB

    To enable CockroachDB authentication and authorization the following need to be configured in the vernemq.conf file:

    Notice that if the CockroachDB installation is secure, then TLS is required. If using an insecure installation without TLS, then vmq_diversity.cockroachdb.ssl can be set to off.

    CockroachDB hashing methods:

    method | client-side | server-side
    bcrypt | ✓           |
    sha256 |             | ✓

    Creating the CockroachDB tables

    The following SQL DDL must be applied:

    To enter new ACL entries use a query similar to the following; the example is for the bcrypt hashing method:

    MySQL

    For MySQL authentication and authorization configure the following in vernemq.conf:

    MySQL hashing methods:

    method   | client-side | server-side
    sha256   |             | ✓
    md5*     |             | ✓
    sha1*    |             | ✓
    password |             | ✓

    It should be noted that all the above options store unsalted passwords which are vulnerable to rainbow table attacks, so the threat model should be considered carefully when using them. Also note that the methods marked with * are no longer considered secure hashes.

    Creating the MySQL tables

    The following SQL DDL must be applied:

    To enter new ACL entries use a query similar to the following; the example uses PASSWORD() for password hashing:

    Note, the PASSWORD() hashing method needs to be changed according to the configuration set in vmq_diversity.mysql.password_hash_method; it supports the options password, md5, sha1 and sha256. Learn more about the MySQL equivalents of those methods at https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html.

    The default password method has been deprecated since MySQL 5.7.6 and is not usable with MySQL 8.0.11+. Also, the MySQL authentication method caching_sha2_password is not supported. This is the default in MySQL 8.0.4 and later, so you need to add default_authentication_plugin=mysql_native_password under [mysqld] in e.g. /etc/mysql/my.cnf.

    MongoDB

    For MongoDB authentication and authorization configure the following in vernemq.conf:

    VerneMQ supports MongoDB's DNS SRV record lookup to fetch a seed list. Specify the hostname of the hosted database as an srv option instead of host and port. VerneMQ will randomly choose a host/port combination from the seed list returned in the DNS SRV record. MongoDB SRV connections use TLS by default; you will need to configure TLS support for MongoDB for most SRV connections.

    MongoDB supports a number of node types in replica sets. The built-in MongoDB support simply connects to the host and port specified. It does not differentiate between primary or secondary nodes in MongoDB replica sets.

    MongoDB hashing methods:

    method | client-side | server-side
    bcrypt | ✓           |

    Insert the ACL using the mongo shell or any software library. The passhash property contains the bcrypt hash of the client's password.
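    Besides the mongo shell, the same document can of course be inserted from any driver. Here is a rough Python sketch assuming the pymongo and bcrypt packages and an assumed database name vernemq_db:

    import bcrypt                    # assumption: requires the Python bcrypt package
    from pymongo import MongoClient  # assumption: requires the pymongo package

    client = MongoClient("mongodb://127.0.0.1:27017")
    db = client["vernemq_db"]        # assumed database name, adjust to your setup

    db.vmq_acl_auth.insert_one({
        "mountpoint": "",
        "client_id": "test-client",
        "username": "test-user",
        "passhash": bcrypt.hashpw(b"123", bcrypt.gensalt(prefix=b"2a")).decode(),
        "publish_acl": [{"pattern": "a/b/c"}, {"pattern": "a/+/d"}],
        "subscribe_acl": [{"pattern": "a/#"}],
    })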

    Redis

    For Redis authentication and authorization configure the following in vernemq.conf:

    Redis hashing methods:

    method | client-side | server-side
    bcrypt | ✓           |

    Insert the ACL using the redis-cli shell or any software library. The passhash property contains the bcrypt hash of the client's password. The key is an encoded JSON array containing the mountpoint, client id, and username. Note that no spaces are allowed between the array items.
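    For illustration, the same entry can also be written from Python; this sketch assumes the redis and bcrypt packages and simply mirrors the key/value format described above:

    import json
    import bcrypt  # assumption: requires the Python bcrypt package
    import redis   # assumption: requires the redis package

    r = redis.Redis(host="127.0.0.1", port=6379, db=0)

    # Key: JSON array [mountpoint, client id, username], compact encoding (no spaces)
    key = json.dumps(["", "test-client", "test-user"], separators=(",", ":"))
    value = json.dumps({
        "passhash": bcrypt.hashpw(b"123", bcrypt.gensalt(prefix=b"2a")).decode(),
        "subscribe_acl": [{"pattern": "a/+/c"}],
    }, separators=(",", ":"))

    r.set(key, value)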

    plugins.vmq_diversity = on

    plugins.vmq_passwd = off
    plugins.vmq_acl = off
    {
        "pattern": "a/+/c"
    }
    {
        "pattern": "a/+/c",
        "max_qos": 2,
        "max_payload_size": 128,
        "allowed_retain": true
    }
    {
        "pattern": "a/+/c",
        "max_qos": 2,
        "max_payload_size": 128,
        "allowed_retain": true,
        "modifiers": {
            "topic": "new/topic",
            "payload": "new payload",
            "qos": 2,
            "retain": true,
            "mountpoint": "other-mountpoint"
        }
    }
    {
        "pattern": "a/+/c",
        "max_qos": 2
    }
    {
        "pattern": "a/+/c",
        "max_qos": 2,
        "modifiers": [
            ["new/topic/1", 1],
            ["new/topic/2", 1]
        ]
    }
    vmq_diversity.auth_postgres.enabled = on
    vmq_diversity.postgres.host = 127.0.0.1
    vmq_diversity.postgres.port = 5432
    vmq_diversity.postgres.user = vernemq
    vmq_diversity.postgres.password = vernemq
    vmq_diversity.postgres.database = vernemq_db
    vmq_diversity.postgres.password_hash_method = crypt
    vmq_diversity.ssl.enabled = on
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    vmq_diversity.auth_postgres.enabled = off
    
    ## 
    ## Default: localhost
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.postgres.host = localhost
    
    ## 
    ## Default: 5432
    ## 
    ## Acceptable values:
    ##   - an integer
    ## vmq_diversity.postgres.port = 5432
    
    ## 
    ## Default: root
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.postgres.user = root
    
    ## 
    ## Default: password
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.postgres.password = password
    
    ## 
    ## Default: vernemq_db
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.postgres.database = vernemq_db
    
    ## Specify if the postgresql driver should use TLS or not.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    vmq_diversity.postgres.ssl = off
    
    ## The cafile is used to define the path to a file containing
    ## the PEM encoded CA certificates that are trusted.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.postgres.cafile = ./etc/cafile.pem
    
    ## Set the path to the PEM encoded server certificate.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.postgres.certfile = ./etc/cert.pem
    
    ## Set the path to the PEM encoded key file.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.postgres.keyfile = ./etc/keyfile.pem
    
    ## The password hashing method to use in PostgreSQL:
    ## 
    ## Default: crypt
    ## 
    ## Acceptable values:
    ##   - one of: crypt, bcrypt
    vmq_diversity.postgres.password_hash_method = crypt
    CREATE EXTENSION pgcrypto;
    CREATE TABLE vmq_auth_acl
     (
       mountpoint character varying(10) NOT NULL,
       client_id character varying(128) NOT NULL,
       username character varying(128) NOT NULL,
       password character varying(128),
       publish_acl json,
       subscribe_acl json,
       CONSTRAINT vmq_auth_acl_primary_key PRIMARY KEY (mountpoint, client_id, username)
     );
    WITH x AS (
        SELECT
            ''::text AS mountpoint,
               'test-client'::text AS client_id,
               'test-user'::text AS username,
               '123'::text AS password,
               gen_salt('bf')::text AS salt,
               '[{"pattern": "a/b/c"}, {"pattern": "c/b/#"}]'::json AS publish_acl,
               '[{"pattern": "a/b/c"}, {"pattern": "c/b/#"}]'::json AS subscribe_acl
        )
    INSERT INTO vmq_auth_acl (mountpoint, client_id, username, password, publish_acl, subscribe_acl)
        SELECT
            x.mountpoint,
            x.client_id,
            x.username,
            crypt(x.password, x.salt),
            publish_acl,
            subscribe_acl
        FROM x;
    vmq_diversity.auth_cockroachdb.enabled = on
    vmq_diversity.cockroachdb.host = 127.0.0.1
    vmq_diversity.cockroachdb.port = 26257
    vmq_diversity.cockroachdb.user = vernemq
    vmq_diversity.cockroachdb.password = vernemq
    vmq_diversity.cockroachdb.database = vernemq_db
    vmq_diversity.cockroachdb.ssl = on
    vmq_diversity.cockroachdb.password_hash_method = bcrypt
    CREATE TABLE vmq_auth_acl
     (
       mountpoint character varying(10) NOT NULL,
       client_id character varying(128) NOT NULL,
       username character varying(128) NOT NULL,
       password character varying(128),
       publish_acl json,
       subscribe_acl json,
       CONSTRAINT vmq_auth_acl_primary_key PRIMARY KEY (mountpoint, client_id, username)
     );
    WITH x AS (
        SELECT
            ''::text AS mountpoint,
               'test-client1'::text AS client_id,
               'test-user1'::text AS username,
               '$2a$12$97PlnSsouvCV7HaxDPV80.EXfsKM4Fg7DAwWhSbGJ6O5CpNep20n2'::text AS hash,
               '[{"pattern": "a/b/c"}, {"pattern": "c/b/#"}]'::json AS publish_acl,
               '[{"pattern": "a/b/c"}, {"pattern": "c/b/#"}]'::json AS subscribe_acl
        )
    INSERT INTO vmq_auth_acl (mountpoint, client_id, username, password, publish_acl, subscribe_acl)
        SELECT
            x.mountpoint,
            x.client_id,
            x.username,
            x.hash,
            publish_acl,
            subscribe_acl
        FROM x;
    vmq_diversity.auth_mysql.enabled = on
    vmq_diversity.mysql.host = 127.0.0.1
    vmq_diversity.mysql.port = 3306
    vmq_diversity.mysql.user = vernemq
    vmq_diversity.mysql.password = vernemq
    vmq_diversity.mysql.database = vernemq_db
    vmq_diversity.mysql.password_hash_method = password
    CREATE TABLE vmq_auth_acl
    (
      mountpoint VARCHAR(10) NOT NULL,
      client_id VARCHAR(128) NOT NULL,
      username VARCHAR(128) NOT NULL,
      password VARCHAR(128),
      publish_acl TEXT,
      subscribe_acl TEXT,
      CONSTRAINT vmq_auth_acl_primary_key PRIMARY KEY (mountpoint, client_id, username)
    )
    INSERT INTO vmq_auth_acl
        (mountpoint, client_id, username,
         password, publish_acl, subscribe_acl)
    VALUES
        ('', 'test-client', 'test-user', PASSWORD('123'),
         '[{"pattern":"a/b/c"},{"pattern":"c/b/#"}]',
         '[{"pattern":"a/b/c"},{"pattern":"c/b/#"}]');
    vmq_diversity.auth_mongodb.enabled = on
    vmq_diversity.mongodb.host = 127.0.0.1
    vmq_diversity.mongodb.port = 27017
    # vmq_diversity.mongodb.login =
    # vmq_diversity.mongodb.password =
    # vmq_diversity.mongodb.database =
    vmq_diversity.auth_mongodb.enabled = on
    vmq_diversity.mongodb.srv = vmqtest.08t1b.mongodb.net
    vmq_diversity.mongodb.login = username
    vmq_diversity.mongodb.password = secretpass
    # vmq_diversity.mongodb.database =
    db.vmq_acl_auth.insert({
        mountpoint: '',
        client_id: 'test-client',
        username: 'test-user',
        passhash: '$2a$12$WDzmynWSMRVzfszQkB2MsOWYQK9qGtfjVpO8iBdimTOjCK/u6CzJK',
        publish_acl: [
            {pattern: 'a/b/c'},
            {pattern: 'a/+/d'}
        ],
        subscribe_acl: [
            {pattern: 'a/#'}
        ]
    })
    vmq_diversity.auth_redis.enabled = on
    vmq_diversity.redis.host = 127.0.0.1
    vmq_diversity.redis.port = 6379
    # vmq_diversity.redis.user = "default"
    # vmq_diversity.redis.password =
    # vmq_diversity.redis.database = 0
    SET "[\"\",\"test-client\",\"test-user\"]" "{\"passhash\":\"$2a$12$WDzmynWSMRVzfszQkB2MsOWYQK9qGtfjVpO8iBdimTOjCK/u6CzJK\",\"subscribe_acl\":[{\"pattern\":\"a/+/c\"}]}"

    Lua Scripting Support

    Learn how to implement VerneMQ plugins using the Lua Scripting Language.

    Developing VerneMQ plugins in Erlang is the most powerful way to extend the functionality of a VerneMQ broker, but it might be a barrier for developers not familiar with Erlang. For this reason, we've implemented a VerneMQ extension that allows you to develop plugins using the Lua scripting language. This extension is called vmq_diversity and is shipped as part of VerneMQ.

    vmq_diversity uses the Luerl Project, which is an implementation of Lua 5.2 in pure Erlang, instead of the official Lua interpreter.

    Moreover, vmq_diversity provides simple Lua libraries to communicate with MySQL, PostgreSQL, MongoDB, and Redis from within your Lua VerneMQ plugins. An additional JSON encoding/decoding library as well as a generic HTTP client library give your Lua scripts a great way to talk to external services.

    Configuration

    To enable vmq_diversity make sure to set:

    Specifying a script to load when VerneMQ starts can be done like this:

    It is also possible to dynamically load a Lua script using vmq-admin:

    To reload a script after a change:

    If the vmq_diversity plugin is enabled, the folder ./share/lua is scanned for Lua scripts to load automatically during startup. The automatic load folder can be configured in the vernemq.conf file by changing the vmq_diversity.script setting.

    Implementing a VerneMQ plugin

    A VerneMQ plugin typically consists of one or more implemented VerneMQ hooks. We tried to keep the differences between the traditional Erlang based and Lua based plugins as small as possible. Please check out the Plugin Development Guide for more information about the different flows and a description of the different hooks.

    Your first Lua plugin

    Let's start with a first very basic example that implements a basic authentication and authorization scheme.

    It is also possible to try the next plugin in the chain (see: Chaining) by returning next instead of false.

    Data Providers

    This subsection describes the data providers currently available to a Lua script. Every data provider is backed by a connection pool that has to be configured by your script.

    MySQL

    ensure_pool

    Ensures that the connection pool named config.pool_id is setup in the system. The config argument is a Lua table holding the following keys:

    • pool_id: Name of the connection pool (mandatory).

    • size: Size of the connection pool (default is 5).

    • user: MySQL account name for login

    • password: MySQL account password for login (in clear text).

    • host: Host name for the MySQL server (default is localhost)

    • port: Port that the MySQL server is listening on (default is 3306).

    • database: MySQL database name.

    • encoding: Encoding (default is latin1)

    This call throws a badarg error in case it cannot setup the pool otherwise it returns true.

    execute

    Executes the provided SQL statement using a connection from the connection pool.

    • pool_id: Name of the connection pool to use for this statement.

    • stmt: A valid MySQL statement.

    • args...: A variable number of arguments can be passed to substitute statement parameters.

    Depending on the statement this call returns true or false or a Lua array containing the resulting rows (as Lua tables). In case the statement cannot be executed a badarg error is thrown.

    PostgreSQL

    ensure_pool

    Ensures that the connection pool named config.pool_id is setup in the system. The config argument is a Lua table holding the following keys:

    • pool_id: Name of the connection pool (mandatory).

    • size: Size of the connection pool (default is 5).

    • user: Postgres account name for login

    • password: Postgres account password for login (in clear text).

    • host: Host name for the Postgres server (default is localhost)

    • port: Port that the Postgres server is listening on (default is 5432).

    • database: Postgres database name.

    This call throws a badarg error in case it cannot setup the pool otherwise it returns true.

    execute

    Executes the provided SQL statement using a connection from the connection pool.

    • pool_id: Name of the connection pool to use for this statement.

    • stmt: A valid PostgreSQL statement.

    • args...: A variable number of arguments can be passed to substitute statement parameters.

    Depending on the statement this call returns true or false or a Lua array containing the resulting rows (as Lua tables). In case the statement cannot be executed a badarg error is thrown.

    MongoDB

    ensure_pool

    Ensures that the connection pool named config.pool_id is setup in the system. The config argument is a Lua table holding the following keys:

    • pool_id: Name of the connection pool (mandatory).

    • size: Size of the connection pool (default is 5).

    • login: MongoDB login name

    • password: MongoDB password for login.

    • host: Host name for the MongoDB server (default is localhost)

    • port: Port that the MongoDB server is listening on (default is 27017).

    • database: MongoDB database name.

    • w_mode: Set mode for writes either to "unsafe" or "safe".

    • r_mode: Set mode for reads either to "master" or "slave_ok".

    This call throws a badarg error in case it cannot setup the pool otherwise it returns true.

    insert

    Insert the provided document (or list of documents) into the collection.

    • pool_id: Name of the connection pool to use for this statement.

    • collection: Name of a MongoDB collection.

    • doc_or_docs: A single Lua table or a Lua array containing multiple Lua tables.

    The provided document can set the document id using the _id key. If the id isn't provided one gets autogenerated. The call returns the inserted document(s) or throws a badarg error if it cannot insert the document(s).

    update

    Updates all documents in the collection that match the given selector.

    • pool_id: Name of the connection pool to use for this statement.

    • collection: Name of a MongoDB collection.

    • selector: A single Lua table containing the filter properties.

    • doc: A single Lua table containing the update properties.

    The call returns true or throws a badarg error if it cannot update the document(s).

    delete

    Deletes all documents in the collection that match the given selector.

    • pool_id: Name of the connection pool to use for this statement.

    • collection: Name of a MongoDB collection.

    • selector: A single Lua table containing the filter properties.

    The call returns true or throws a badarg error if it cannot delete the document(s).

    find

    Finds all documents in the collection that match the given selector.

    • pool_id: Name of the connection pool to use for this statement.

    • collection: Name of a MongoDB collection.

    • selector: A single Lua table containing the filter properties.

    • args: A Lua table that currently supports an optional projector=LuaTable element.

    The call returns a MongoDB cursor or throws a badarg error if it cannot setup the iterator.

    next

    Fetches next available document given a cursor object obtained via find.

    The call returns the next available document or false if all documents have been fetched.

    take

    Fetches the next nr_of_docs documents given a cursor object obtained via find.

    The call returns a Lua array containing the documents or false if all documents have been fetched.

    close

    Closes and cleans up a cursor object obtained via find.

    The call returns true.

    find_one

    Finds the first document in the collection that matches the given selector.

    • pool_id: Name of the connection pool to use for this statement.

    • collection: Name of a MongoDB collection.

    • selector: A single Lua table containing the filter properties.

    • args: A Lua table that currently supports an optional projector=LuaTable element.

    The call returns the matched document or false in case no document was found.

    Redis

    ensure_pool

    Ensures that the connection pool named config.pool_id is setup in the system. The config argument is a Lua table holding the following keys:

    • pool_id: Name of the connection pool (mandatory).

    • size: Size of the connection pool (default is 5).

    • password: Redis password for login.

    • host: Host name for the Redis server (default is localhost)

    • port: Port that the Redis server is listening on (default is 6379).

    • database: Redis database (default is 0).

    This call throws a badarg error in case it cannot setup the pool otherwise it returns true.

    cmd

    Executes the given Redis command.

    • pool_id: Name of the connection pool

    • command: Redis command string.

    • args...: Extra args.

    This call returns a Lua table, true, false, or nil. In case it cannot parse the command a badarg error is thrown.

    Memcached

    ensure_pool

    Ensures that the pool named config.pool_id is setup in the system. The config argument is a Lua table holding the following keys:

    • pool_id: Name of the connection pool (mandatory).

    • size: Size of the connection pool (default is 5).

    • host: Host name for the Memcached server (default is localhost)

    • port: Port that the Memcached server is listening on (default is 11211).

    This call throws a badarg error in case it cannot setup the pool otherwise it returns true.

    flush_all(pool_id)

    Flushes all data from the Memcached server. Use with care.

    Returns true.

    get(pool_id, key)

    Get data for key key.

    Returns the data stored under key if it exists, otherwise false.

    set(pool_id, key, value, expiration)

    Unconditionally set a value for a key.

    • key: Key.

    • value: Value.

    • expiration: time in seconds until the key/value pair is deleted. This parameter is optional with a default value of 0 (no expiration).

    Returns value.

    add(pool_id, key, value, expiration)

    Add a key/value pair if the key doesn't already exist.

    • key: Key.

    • value: Value.

    • expiration: time in seconds until the key/value pair is deleted. This parameter is optional with a default value of 0 (no expiration).

    Returns value if key didn't already exist, false otherwise.

    replace(pool_id, key, value, expiration)

    Replace a key/value pair if the key already exists.

    • key: Key.

    • value: Value.

    • expiration: time in seconds until the key/value pair is deleted. This parameter is optional with a default value of 0 (no expiration).

    Returns value if key already exists, false otherwise.

    delete(pool_id, key)

    Delete key and the associated value.

    Returns true if the key/value pair was deleted, false otherwise.

    HTTP and Json Client Libraries

    HTTP Client

    ensure_pool

    Ensures that the connection pool named config.pool_id is setup in the system. The config argument is a Lua table holding the following keys:

    • pool_id: Name of the connection pool (mandatory).

    • size: Size of the connection pool (default is 10).

    This call throws a badarg error in case it cannot setup the pool otherwise it returns true.

    get, put, post, delete

    Executes an HTTP request with the given url and args.

    • url: A valid http url.

    • body: optional body to be included in the request.

    • headers: optional Lua table containing extra headers to be included in the request.

    This call returns false in case of an error or a Lua table of the form:

    body

    Fetches the response body given a client ref obtained via the response Lua table.

    This call returns false in case of an error or the response body.

    JSON

    encode

    Encodes a Lua value to a JSON string.

    This call returns false if it cannot encode the given value.

    decode

    Decodes a JSON string to a Lua value.

    This call returns false if it cannot decode the JSON string.

    Logger

    Uses the VerneMQ logging infrastructure to log the given log_string.

    plugins.vmq_diversity = on
    vmq_diversity.myscript1.file = Path/to/Script.lua
    $ vmq-admin script load path=/Abs/Path/To/script.lua
    $ vmq-admin script reload path=/Abs/Path/To/script.lua
    -- the function that implements the auth_on_register/5 hook
    -- the reg object contains everything required to authenticate a client
    --      reg.addr: IP Address e.g. "192.168.123.123"
    --      reg.port: Port e.g. 12345
    --      reg.mountpoint: Mountpoint e.g. ""
    --      reg.username: UserName e.g. "test-user"
    --      reg.password: Password e.g. "test-password"
    --      reg.client_id: ClientId e.g. "test-id"
    --      reg.clean_session: CleanSession Flag true
    function my_auth_on_register(reg)
        -- only allow clients connecting from this host
        if reg.addr == "192.168.10.10" then
            --only allow clients with this username 
            if reg.username == "demo-user" then
                -- only allow clients with this clientid
                if reg.client_id == "demo-id" then
                    return true
                end
            end
        end
        return false
    end
    
    -- the function that implements the auth_on_publish/6 hook
    -- the pub object contains everything required to authorize a publish request
    --      pub.mountpoint: Mountpoint e.g. ""
    --      pub.client_id: ClientId e.g. "test-id"
    --      pub.topic: Publish Topic e.g. "test/topic"
    --      pub.qos: Publish QoS e.g. 1
    --      pub.payload: Payload e.g. "hello world"
    --      pub.retain: Retain flag e.g. false
    function my_auth_on_publish(pub)
        -- only allow publishes on this topic with QoS = 0
        if pub.topic == "demo/topic" and pub.qos == 0 then
            return true
        end
        return false
    end
    
    -- the function that implements the auth_on_subscribe/3 hook
    -- the sub object contains everything required to authorize a subscribe request
    --      sub.mountpoint: Mountpoint e.g. ""
    --      sub.client_id: ClientId e.g. "test-id"
    --      sub.topics: A list of Topic/QoS Pairs e.g. { {"topic/1", 0}, {"topic/2, 1} }
    function my_auth_on_subscribe(sub)
        local topic = sub.topics[1]
        if topic then
            -- only allow subscriptions for the topic "demo/topic" with QoS = 0
            if topic[1] == "demo/topic" and topic[2] == 0 then
                return true
            end
        end
        return false
    end
    
    -- the hooks table specifies which hooks this plugin is implementing
    hooks = {
        auth_on_register = my_auth_on_register,
        auth_on_publish = my_auth_on_publish,
        auth_on_subscribe = my_auth_on_subscribe
    }
    mysql.ensure_pool(config)
    mysql.execute(pool_id, stmt, args...)
    postgres.ensure_pool(config)
    postgres.execute(pool_id, stmt, args...)
    mongodb.ensure_pool(config)
    mongodb.insert(pool_id, collection, doc_or_docs)
    mongodb.update(pool_id, collection, selector, doc)
    mongodb.delete(pool_id, collection, selector)
    mongodb.find(pool_id, collection, selector, args)
    mongodb.next(cursor)
    mongodb.take(cursor, nr_of_docs)
    mongodb.close(cursor)
    mongodb.find_one(pool_id, collection, selector, args)
    redis.ensure_pool(config)
    redis.cmd(pool_id, command, args...)
    memcached.ensure_pool(config)
    http.ensure_pool(config)
    http.get(pool_id, url, body, headers)
    http.put(pool_id, url, body, headers)
    http.post(pool_id, url, body, headers)
    http.delete(pool_id, url, body, headers)
    response = {
        status = HTTP_STATUS_CODE,
        headers = Lua Table containing response headers,
        ref = Client Ref
    }
    http.body(client_ref)
    json.encode(val)
    json.decode(json_string)
    log.info(log_string)
    log.error(log_string)
    log.warning(log_string)
    log.debug(log_string)

    Webhooks

    How to implement VerneMQ plugins using an HTTP/HTTPS interface

    The VerneMQ Webhooks plugin provides an easy and flexible way to build powerful plugins for VerneMQ using web hooks. With VerneMQ Webhooks you are free to select the implementation language to match your technical requirements or the language in which you feel comfortable and productive. You can use any modern language such as Python, Go, C#/.Net and indeed any language in which you can build something that can handle HTTP(s) requests.

    The idea of VerneMQ Webhooks is very simple: you can register an HTTP(s) endpoint with a VerneMQ plugin hook and whenever the hook (such as auth_on_register) is called, the VerneMQ Webhooks plugin dispatches an HTTP POST request to the registered endpoint. The HTTP POST request contains an HTTP header like vernemq-hook: auth_on_register and a JSON encoded payload. The endpoint then responds with code 200 on success and with a JSON encoded payload informing the VerneMQ Webhooks plugin which action to take (if any).

    Configuring webhooks

    To enable webhooks make sure to set:

    And then each webhook can be configured like this:

    It is possible to have the webhooks plugin omit sending the payload for the auth_on_publish and auth_on_publish_m5 webhooks by setting the no_payload config:

    It is also possible to dynamically register webhooks at run-time:

    See which endpoints are registered:

    And finally deregistering an endpoint:

    We recommend placing the endpoint implementation locally on each VerneMQ node such that each request can go over localhost without being subject to network issues.

    HTTPS

    In case your WebHooks backend requires HTTPS, you can configure the VerneMQ internal HTTP client to do so as well. There are various options you can set in the vernemq.conf file:

    Check the WebHooks Schema file for quick documentation on those options or to look up their configured defaults.

    Connection pool configuration

    Each registered hook by default uses a connection pool containing at most 100 connections. This can be changed by setting vmq_webhooks.pool_max_connections to a different value. Similarly, the vmq_webhooks.pool_timeout configuration (value is in milliseconds) can be set to control how long an unused connection should stay in the connection pool before being closed and removed. The default value is 60000 (60 seconds).

    Caching

    VerneMQ webhooks support caching of the auth_on_register, auth_on_publish, auth_on_subscribe, auth_on_register_m5, auth_on_publish_m5 and auth_on_subscribe_m5 hooks.

    This can be used to speed up authentication and authorization tremendously. All data passed to these hooks is used to look up the call in the cache, except in the case of auth_on_publish and auth_on_publish_m5, where the payload is omitted.

    To enable caching for an endpoint simply return the cache-control: max-age=AgeInSeconds in the response headers to one of the mentioned hooks. If the call was successful (authentication granted), the request will be cached together with any modifiers, except for the payload modifier in the auth_on_publish hook.

    Whenever a non-expired entry is looked up in the cache the endpoint will not be called and the modifiers of the cached entry will be returned, if any.
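    As a rough sketch of how an endpoint might opt into caching, the handler below returns the cache-control header on a successful auth_on_register call; Flask, the route /myendpoints and port 8000 are illustrative assumptions, not requirements:

    from flask import Flask, jsonify, request  # assumption: requires the Flask package

    app = Flask(__name__)

    @app.route("/myendpoints", methods=["POST"])
    def handle_hook():
        hook = request.headers.get("vernemq-hook")
        if hook == "auth_on_register":
            response = jsonify({"result": "ok"})
            # ask vmq_webhooks to cache this positive result for 5 minutes
            response.headers["cache-control"] = "max-age=300"
            return response
        return jsonify({"result": "ok"})

    if __name__ == "__main__":
        app.run(port=8000)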

    It is possible to inspect the cache using:

    Cache entries are currently not actively disposed after expiry and will remain in memory.

    Webhook specs

    All webhooks are called with method POST. All hooks need to be answered with the HTTP code 200 to be considered successful. Any hook called that does not return the 200 code will be logged as an error, as will any hook with an unparsable payload.

    All hooks are called with the header vernemq-hook which contains the name of the hook in question.
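    During development it can be handy to exercise an endpoint by hand with a request shaped like the ones VerneMQ sends. A rough Python sketch (the endpoint URL and payload values are made up; the payload fields follow the auth_on_register example below):

    import requests  # assumption: requires the Python requests package

    payload = {
        "peer_addr": "127.0.0.1",
        "peer_port": 12345,
        "username": "joe",
        "password": "secret",
        "mountpoint": "",
        "client_id": "test-client",
        "clean_session": True,
    }

    response = requests.post(
        "http://127.0.0.1:8000/myendpoints",           # assumed endpoint URL
        json=payload,
        headers={"vernemq-hook": "auth_on_register"},  # hook name goes in this header
    )
    print(response.status_code, response.json())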

    For detailed information about the hooks and when they are called, see the sections Session Lifecycle, Subscribe Flow, and Publish Flow.

    Note, when overriding a mountpoint or a client-id both have to be returned by the webhook implementation for it to have an effect.

    Responses

    All hooks, unless stated otherwise, respond with a JSON-encoded payload and a success code of 200. All hooks support responding with "ok", indicating that the request was successful.

    Other possible responses are "next", meaning that the next callback should be tried.

    Errors, e.g. authentication failures, are returned with an "error" payload, either with the predefined "not_allowed"

    or some other error text:

    auth_on_register

    Header: vernemq-hook: auth_on_register

    Webhook example payload:

    In addition to the standard "ok" response, it is also possible to override various client specific settings by returning an array of modifiers:

    Note, the retry_interval is in milliseconds. It is possible to override many more settings, see the Session Lifecycle section for more information.

    Other possible responses are next and error (not_allowed).

    auth_on_subscribe

    Header: vernemq-hook: auth_on_subscribe

    Webhook example payload:

    An example where the topics to subscribe have been rewritten looks like:

    Note, you can also pass a qos with value 128, which means it was either not possible or the client was not allowed to subscribe to that specific topic.

    Other possible responses are "next" and "error".

    auth_on_publish

    Header: vernemq-hook: auth_on_publish

    Note, in the example below the payload is not base64 encoded which is not the default.

    Webhook example payload:

    A complex example where the publish topic, qos, payload and retain flag are rewritten looks like:

    Other possible responses are "next" and "error".

    on_register

    Header: vernemq-hook: on_register

    Webhook example payload:

    The response should be an empty json object {}.

    on_publish

    Header: vernemq-hook: on_publish

    Note, in the example below the payload is not base64 encoded which is not the default.

    Webhook example payload:

    The response should be an empty json object {}.

    on_subscribe

    Header: vernemq-hook: on_subscribe

    Webhook example payload:

    The response should be an empty json object {}.

    on_unsubscribe

    Header: vernemq-hook: on_unsubscribe

    Webhook example payload:

    Example response:

    Other possible responses are "next" and "error".

    on_deliver

    Header: vernemq-hook: on_deliver

    Note, in the example below the payload is not base64 encoded which is not the default.

    Webhook example payload:

    Example response:

    An other possible response is "next".

    on_offline_message

    Header: vernemq-hook: on_offline_message

    Note, in the example below the payload is not base64 encoded which is not the default.

    Webhook example payload:

    The response should be an empty json object {}.

    on_client_wakeup

    Header: vernemq-hook: on_client_wakeup

    Webhook example payload:

    The response should be an empty json object {}.

    on_client_offline

    Header: vernemq-hook: on_client_offline

    Webhook example payload:

    The response should be an empty json object {}.

    on_client_gone

    Header: vernemq-hook: on_client_gone

    Webhook example payload:

    The response should be an empty json object {}.

    auth_on_register_m5

    Header: vernemq-hook: auth_on_register_m5

    Webhook example payload:

    It is also possible to override various client specific settings by returning an array of modifiers:

    Note, the retry_interval is in milliseconds. It is possible to override many more settings, see the Session Lifecycle section for more information.

    Other possible responses are "next" and "error".

    on_auth_m5

    Header vernemq-hook: on_auth_m5

    Webhook example payload:

    Note, as the authentication data is binary data it is base64 encoded.

    A minimal response indicating the authentication was successful looks like:

    If authentication were to continue for another round, a reason code with value 24 (Continue Authentication) should be returned instead. See also the relevant section in the MQTT 5.0 specification.

    auth_on_subscribe_m5

    Header: vernemq-hook: auth_on_subscribe_m5

    Webhook example payload:

    An example where the topics to subscribe have been rewritten looks like:

    Note, the forbidden/topic has been rejected with the qos value of 135 (Not authorized).

    Other possible responses are "next" and "error".

    auth_on_publish_m5

    Header: vernemq-hook: auth_on_publish_m5

    Note, in the example below the payload is not base64 encoded which is not the default.

    Webhook example payload:

    A response where the publish topic has been rewritten:

    Other possible responses are "next" and "error" (not_allowed).

    on_register_m5

    Header: vernemq-hook: on_register_m5

    Webhook example payload:

    The response should be an empty json object {}.

    on_publish_m5

    Header: vernemq-hook: on_publish_m5

    Note, in the example below the payload is base64 encoded.

    Webhook example payload:

    The response should be an empty json object {}.

    on_subscribe_m5

    Header: vernemq-hook: on_subscribe_m5

    Webhook example payload:

    Note, the qos value of 128 (Unspecified error) means the subscription was rejected.

    The response should be an empty json object {}.

    on_unsubscribe_m5

    Header: vernemq-hook: on_unsubscribe_m5

    Webhook example payload:

    Example response:

    It supports the standard "ok" response, as well as "next".

    on_deliver_m5

    Header: vernemq-hook: on_deliver_m5

    Note, in the example below the payload is not base64 encoded which is not the default.

    Webhook example payload:

    It supports the standard "ok" response, as well as "next" and "error".
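
    No dedicated response example is shown for this hook; a minimal hedged sketch, reusing the generic response forms used throughout this page, would be:

    {
        "result": "ok"
    }

    or, to signal an error:

    {
        "result": {
            "error": "some_error_message"
        }
    }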

    Example Webhook in Python

    Below is a very simple example of an endpoint implemented in Python. It uses the web.py framework and the json module and implements handlers for six different hooks: auth_on_register, auth_on_publish, auth_on_subscribe, auth_on_register_m5, auth_on_publish_m5 and auth_on_subscribe_m5.

    The auth_on_register hook restricts access to the user with username joe and password secret. It also shows how to cache the result. The auth_on_subscribe and auth_on_publish hooks allow any subscription or publish to continue as is. These last two hooks are needed as the default policy is deny.

    Python Code

    Configuration

    The following configuration can be used for testing the Python example.

    plugins.vmq_webhooks = on
    vmq_webhooks.mywebhook1.hook = auth_on_register
    vmq_webhooks.mywebhook1.endpoint = http://127.0.0.1/myendpoints
    vmq_webhooks.mywebhook1.no_payload = on
    $ vmq-admin webhooks register hook=auth_on_register endpoint="http://localhost"
    $ vmq-admin webhooks show
    $ vmq-admin webhooks deregister hook=auth_on_register endpoint="http://localhost"
    vmq_webhooks.cafile
    vmq_webhooks.tls_version
    vmq_webhooks.verify_peer
    vmq_webhooks.depth
    vmq_webhooks.certfile
    vmq_webhooks.use_crls
    vmq_webhooks.keyfile
    vmq_webhooks.keyfile_password
    $ vmq-admin webhooks cache show
    {
        "result": "ok"
    }
    {
      "result": "next"
    }
    {
      "result": {
        "error": "not_allowed"
      }
    }
    {
      "result": {
        "error": "some_error_message"
      }
    }
    {
        "peer_addr": "127.0.0.1",
        "peer_port": 8888,
        "username": "username",
        "password": "password",
        "mountpoint": "",
        "client_id": "clientid",
        "clean_session": false
    }
    {
        "result": "ok",
        "modifiers": {
            "max_message_size": 65535,
            "max_inflight_messages": 10000,
            "retry_interval": 20000
        }
    }
    {
        "client_id": "clientid",
        "mountpoint": "",
        "username": "username",
        "topics":
            [{"topic": "a/b",
              "qos": 1},
             {"topic": "c/d",
              "qos": 2}]
    }
    {
        "result": "ok",
        "topics":
            [{"topic": "rewritten/topic",
              "qos": 0}]
    }
    {
        "username": "username",
        "client_id": "clientid",
        "mountpoint": "",
        "qos": 1,
        "topic": "a/b",
        "payload": "hello",
        "retain": false
    }
    {
        "result": "ok",
        "modifiers": {
            "topic": "rewritten/topic",
            "qos": 2,
            "payload": "rewritten payload",
            "retain": true
        }
    }
    {
        "peer_addr": "127.0.0.1",
        "peer_port": 8888,
        "username": "username",
        "mountpoint": "",
        "client_id": "clientid"
    }
    {
        "username": "username",
        "client_id": "clientid",
        "mountpoint": "",
        "qos": 1,
        "topic": "a/b",
        "payload": "hello",
        "retain": false
    }
    {
        "client_id": "clientid",
        "mountpoint": "",
        "username": "username",
        "topics":
            [{"topic": "a/b",
              "qos": 1},
             {"topic": "c/d",
              "qos": 2}]
    }
    {
        "username": "username",
        "client_id": "clientid",
        "mountpoint": "",
        "topics":
            ["a/b", "c/d"]
    }
    {
        "result": "ok",
        "topics":
            ["rewritten/topic"]
    }
    {
        "username": "username",
        "client_id": "clientid",
        "mountpoint": "",
        "topic": "a/b",
        "payload": "hello"
    }
    {
        "result": "ok",
        "modifiers": {
            "topic": "rewritten/topic",
            "payload": "rewritten payload"
        }
    }
    {
        "client_id": "clientid",
        "mountpoint": "",
        "qos": "1",
        "topic": "sometopic",
        "payload": "payload",
        "retain": false
    }
    {
        "client_id": "clientid",
        "mountpoint": ""
    }
    {
        "client_id": "clientid",
        "mountpoint": ""
    }
    {
        "client_id": "clientid",
        "mountpoint": ""
    }
    {
        "peer_addr": "127.0.0.1",
        "peer_port": 8888,
        "mountpoint": "",
        "client_id": "client-id",
        "username": "username",
        "password": "password",
        "clean_start": true,
        "properties": {}
    }
    {
        "result": "ok",
        "modifiers": {
            "max_message_size": 65535,
            "max_inflight_messages": 10000
        }
    }
    {
        "username": "username",
        "mountpoint": "",
        "client_id": "client-id",
        "properties": {
          "p_authentication_data": "QVVUSF9EQVRBMA==",
          "p_authentication_method": "AUTH_METHOD"
        }
    }
      "modifiers": {
        "properties": {
          "p_authentication_data": "QVVUSF9EQVRBMQ==",
          "p_authentication_method": "AUTH_METHOD"
        }
        "reason_code": 0
      },
      "result": "ok"
    }
    {
        "username": "username",
        "mountpoint": "",
        "client_id": "client-id",
        "topics": [
          {
            "topic": "test/topic",
            "qos": 1
          }
        ],
        "properties": {}
    }
    {
        "modifiers": {
            "topics": [
                {
                    "qos": 2,
                    "topic": "rewritten/topic"
                },
                {
                    "qos": 135,
                    "topic": "forbidden/topic"
                }
            ]
        },
        "result": "ok"
    }
    {
        "username": "username",
        "mountpoint": "",
        "client_id": "client-id",
        "qos": 1,
        "topic": "some/topic",
        "payload": "message payload",
        "retain": false,
        "properties": {
        }
    }
    {
        "modifiers": {
            "topic": "rewritten/topic"
        },
        "result": "ok"
    }
    {
        "peer_addr": "127.0.0.1",
        "peer_port": 8888,
        "mountpoint": "",
        "client_id": "client-id",
        "username": "username",
        "properties": {
        }
    }
    {
        "username": "username",
        "mountpoint": "",
        "client_id": "client-id",
        "qos": 1,
        "topic": "test/topic",
        "payload": "message payload",
        "retain": false,
        "properties": {
        }
    }
    {
        "username": "username",
        "mountpoint": "",
        "client_id": "client-id",
        "topics": [
            {
                "topic": "test/topic",
                "qos": 1
            },
            {
                "topic": "test/topic",
                "qos": 128
            }
        ],
        "properties": {
        }
    }
    {
        "username": "username",
        "mountpoint": "",
        "client_id": "client-id",
        "topics": [
            "test/topic"
        ],
        "properties": {
        }
    }
    {
        "modifiers": {
            "topics": [
                "rewritten/topic"
            ]
        },
        "result": "ok"
    }
    {
        "username": "username",
        "mountpoint": "",
        "client_id": "client-id",
        "topic": "test/topic",
        "payload": "message payload",
        "properties": {
        }
    }
    import web
    import json
    
    urls = ('/.*', 'hooks')
    app = web.application(urls, globals())
    
    class hooks:
        def POST(self):
    
            # fetch hook and request data
            hook = web.ctx.env.get('HTTP_VERNEMQ_HOOK')
            data = json.loads(web.data())
    
            # print the hook and request data to the console
            print()
            print ('hook:', hook)
            print ('  data: ', data)
    
            # dispatch to appropriate function based on the hook.
            if hook == 'auth_on_register':
                return handle_auth_on_register(data)
            elif hook == 'auth_on_register_m5':
                return handle_auth_on_register(data)
            elif hook == 'auth_on_publish':
                return handle_auth_on_publish(data)
            elif hook == 'auth_on_publish_m5':
                return handle_auth_on_publish(data)
            elif hook == 'auth_on_subscribe':
                return handle_auth_on_subscribe(data)
            elif hook == 'auth_on_subscribe_m5':
                return handle_auth_on_subscribe(data)
            else:
                web.ctx.status = 501
                return "not implemented"
    
    def handle_auth_on_register(data):
        # Cache example
        web.header('cache-control', 'max-age=30')
        # only allow user 'joe' with password 'secret', reject all others.
        if "joe" == data['username']:
            if "secret" == data['password']:
                return json.dumps({'result': 'ok'})  
        return json.dumps({'result': {'error': 'not allowed'}})
    
    def handle_auth_on_publish(data):
        # accept all publish requests
        return json.dumps({'result': 'ok'})
    
    def handle_auth_on_subscribe(data):
        # accept all subscribe requests
        return json.dumps({'result': 'ok'})
    
    if __name__ == '__main__':
        app.run()
    plugins.vmq_webhooks = on
    # auth_on_register
    vmq_webhooks.webhook1.hook = auth_on_register
    vmq_webhooks.webhook1.endpoint = http://127.0.0.1:8080
    
    # auth_on_subscribe
    vmq_webhooks.webhook2.hook = auth_on_subscribe
    vmq_webhooks.webhook2.endpoint = http://127.0.0.1:8080
    
    # auth_on_register_m5
    vmq_webhooks.webhook3.hook = auth_on_register_m5
    vmq_webhooks.webhook3.endpoint = http://127.0.0.1:8080
    
    # auth_on_subscribe_m5
    vmq_webhooks.webhook4.hook = auth_on_subscribe_m5
    vmq_webhooks.webhook4.endpoint = http://127.0.0.1:8080

    The VerneMQ conf file

    A closer look at an example vernemq.conf file (Note: This is a work-in-progress section)

    VerneMQ is usually configured by editing a single config file called vernemq.conf. The config file will be generated by the make rel process building a release, and it will also come with the binary VerneMQ packages.

    In the vernemq.conf file you will find keys and values (sometimes commented out), along with some short documentation. Some values are hidden, that is, you won't find them in the auto-generated conf file; they are meant to be added to the file manually. Typically, hidden values are not among the most commonly used configuration values.

    Here's a full vernemq.conf template, as generated by the 2.0.0 release. It is a long file, but luckily you won't need to change every value!

    ## Allow anonymous users to connect, default is 'off'. !!NOTE!!
    ## Enabling this completely disables authentication of the clients and
    ## should only be used for testing/development purposes or in case
    ## clients are authenticated by some other means.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    allow_anonymous = off
    
    ## Allow new client connections even when a VerneMQ cluster is inconsistent.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    allow_register_during_netsplit = off
    
    ## Allow message publishes even when a VerneMQ cluster is inconsistent.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    allow_publish_during_netsplit = off
    
    ## Allow new subscriptions even when a VerneMQ cluster is inconsistent.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    allow_subscribe_during_netsplit = off
    
    ## Allow clients to unsubscribe when a VerneMQ cluster is inconsistent.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    allow_unsubscribe_during_netsplit = off
    
    ## Client registrations can happen in either a coordinated or
    ## uncoordinated fashion. Uncoordinated registrations are faster and
    ## will cause other clients with the same client-id to be eventually
    ## disconnected, while coordinated ensures that any other client with
    ## the same client-id will be immediately disconnected.
    ## 
    ## Default: on
    ## 
    ## Acceptable values:
    ##   - on or off
    coordinate_registrations = on
    
    ## Secret to be used for credentials obfuscation. Default is "random" which
    ## generates a random string.
    ## 
    ## Default: random
    ## 
    ## Acceptable values:
    ##   - text
    logging.obfuscation_secret = random
    
    ## Client disconnect due to keepalive is by default a warning. In unstable networks
    ## it might be "expected" behaviour to have a lot of those warnings. This option allows
    ## downgrading the warning to an info message.
    ## 
    ## Default: on
    ## 
    ## Acceptable values:
    ##   - on or off
    logging.keepalive_as_warning = on
    
    ## Set the time in seconds VerneMQ waits before a retry, in case a (QoS=1 or QoS=2) message
    ## delivery gets no answer.
    ## 
    ## Default: 20
    ## 
    ## Acceptable values:
    ##   - an integer
    ## retry_interval = 20
    
    ## Set the maximum size for client IDs. MQTT v3.1 specifies a
    ## limit of 23 characters
    ## 
    ## Default: 100
    ## 
    ## Acceptable values:
    ##   - an integer
    ## max_client_id_size = 100
    
    ## This option allows persistent clients ( = clean session set to
    ## false) to be removed if they do not reconnect within 'persistent_client_expiration'.
    ## This is a non-standard option. As far as the MQTT specification is concerned,
    ## persistent clients persist forever.
    ## The expiration period should be an integer followed by one of 'd', 'w', 'm', 'y' for
    ## day, week, month, and year.
    ## 
    ## Default: never
    ## 
    ## Acceptable values:
    ##   - text
    ## persistent_client_expiration = 1w
    
    ## The maximum delay for a last will message. This setting
    ## applies only to MQTTv5 sessions and can be used to override the
    ## value provided by the client.
    ## The delay can be either 'client', which means the value specified by
    ## the client is used, or an integer followed by one of 's', 'h', 'd',
    ## 'w', 'm', 'y' for seconds, hours, days, weeks, months, and years,
    ## used to cap the value provided by the client.
    ## 
    ## Default: client
    ## 
    ## Acceptable values:
    ##   - text
    ## max_last_will_delay = client
    
    ## The maximum number of QoS 1 or 2 messages that can be in the process of being
    ## transmitted simultaneously. This includes messages currently going through handshakes
    ## and messages that are being retried. Defaults to 20. Set to 0 for no maximum. If set
    ## to 1, this will guarantee in-order delivery of messages.
    ## Note: for MQTT v5, use receive_max_client/receive_max_broker to implement
    ## similar behaviour.
    ## 
    ## Default: 20
    ## 
    ## Acceptable values:
    ##   - an integer
    max_inflight_messages = 20
    
    ## The maximum number of messages to hold in the queue above
    ## those messages that are currently in flight. Defaults to 1000. This affects
    ## messages of any QoS. Set to -1 for no maximum (not recommended).
    ## This option allows you to control how a specific client session can deal
    ## with message bursts. As a general rule of thumb set
    ## this number a bit higher than the expected message rate a single consumer is
    ## required to process. Note that setting this value to 0 will totally block
    ## delivery from any queue.
    ## 
    ## Default: 1000
    ## 
    ## Acceptable values:
    ##   - an integer
    max_online_messages = 1000
    
    ## The maximum number of QoS 1 or 2 messages to hold in the offline queue.
    ## Defaults to 1000. Set to -1 for no maximum (not recommended). Set to 0
    ## if no messages should be stored offline.
    ## 
    ## Default: 1000
    ## 
    ## Acceptable values:
    ##   - an integer
    max_offline_messages = 1000
    
    ## Allows a session that changes from offline to online to override the maximum
    ## online message count (max_online_messages). All offline messages will be added
    ## to the online queue.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    override_max_online_messages = off
    
    ## This option sets the maximum MQTT message size that VerneMQ will
    ## allow. Messages that exceed this size will not be accepted by
    ## VerneMQ. The default value is 0, which means that all valid MQTT
    ## messages are accepted. MQTT imposes a maximum payload size of
    ## 268435455 bytes.
    ## 
    ## Default: 0
    ## 
    ## Acceptable values:
    ##   - an integer
    max_message_size = 0
    
    ## If a message is published with a QoS lower than the QoS of the subscription it is
    ## delivered to, VerneMQ can upgrade the outgoing QoS. This is a non-standard option.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    upgrade_outgoing_qos = off
    
    ## listener.tcp.buffer_sizes is an list of three integers
    ## (sndbuf,recbuf,buffer) specifying respectively the kernel TCP send
    ## buffer, the kernel TCP receive buffer and the user-level buffer
    ## size in the erlang driver.
    ## It is recommended to have val(user-level buffer) >= val(receive
    ## buffer) to avoid performance issues because of unnecessary copying.
    ## If not set, the operating system defaults are used.
    ## This option can be set on the protocol level by:
    ## - listener.tcp.buffer_sizes
    ## - listener.ssl.buffer_sizes
    ## or on the listener level by:
    ## - listener.tcp.my_tcp_listener.buffer_sizes
    ## - listener.ssl.my_ssl_listener.buffer_sizes
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.tcp.buffer_sizes = 4096,16384,32768
    
    ## listener.max_connection_lifetime is an integer defining the maximum lifetime
    ## of MQTT connection in seconds. This option can be overridden on the protocol level by:
    ## - listener.tcp.max_connection_lifetime
    ## - listener.ssl.max_connection_lifetime
    ## - listener.ws.max_connection_lifetime
    ## - listener.wss.max_connection_lifetime
    ## or on the listener level by:
    ## - listener.tcp.my_tcp_listener.max_connection_lifetime
    ## - listener.ssl.my_ssl_listener.max_connection_lifetime
    ## - listener.ws.my_ws_listener.max_connection_lifetime
    ## - listener.wss.my_wss_listener.
    ## This is an implementation of MQTT security proposal:
    ## "Servers may close the Network Connection of Clients and require them to re-authenticate with new credentials."
    ## 
    ## Default: 0
    ## 
    ## Acceptable values:
    ##   - an integer
    listener.max_connection_lifetime = 0
    
    ## listener.max_connections is an integer or 'infinity' defining
    ## the maximum number of concurrent connections. This option can be overridden
    ## on the protocol level by:
    ## - listener.tcp.max_connections
    ## - listener.ssl.max_connections
    ## - listener.ws.max_connections
    ## - listener.wss.max_connections
    ## or on the listener level by:
    ## - listener.tcp.my_tcp_listener.max_connections
    ## - listener.ssl.my_ssl_listener.max_connections
    ## - listener.ws.my_ws_listener.max_connections
    ## - listener.wss.my_wss_listener.max_connections
    ## 
    ## Default: 10000
    ## 
    ## Acceptable values:
    ##   - an integer
    ##   - the text "infinity"
    listener.max_connections = 10000
    
    ## Set the maximum frame in bytes that a WebSocket connection is allowed to
    ## send. If the client tries to send more in one frame, the server will disconnect it.
    ## 
    ## Default: 268435456
    ## 
    ## Acceptable values:
    ##   - an integer
    ##   - the text "infinity"
    max_ws_frame_size = 268435456
    
    ## Set the nr of acceptors waiting to concurrently accept new connections.
    ## This can be specified either on the protocol level:
    ## - listener.tcp.nr_of_acceptors
    ## - listener.ssl.nr_of_acceptors
    ## - listener.ws.nr_of_acceptors
    ## - listener.wss.nr_of_acceptors
    ## or on the listener level:
    ## - listener.tcp.my_tcp_listener.nr_of_acceptors
    ## - listener.ssl.my_ssl_listener.nr_of_acceptors
    ## - listener.ws.my_ws_listener.nr_of_acceptors
    ## - listener.wss.my_wss_listener.nr_of_acceptors
    ## 
    ## Default: 10
    ## 
    ## Acceptable values:
    ##   - an integer
    listener.nr_of_acceptors = 10
    
    ## listener.tcp.<name> is an IP address and TCP port that
    ## the broker will bind to. You can define multiple listeners e.g:
    ## - listener.tcp.default = 127.0.0.1:1883
    ## - listener.tcp.internal = 127.0.0.1:10883
    ## - listener.tcp.my_other_listener = 127.0.0.1:10884
    ## This also works for SSL listeners and WebSocket handlers:
    ## - listener.ssl.default = 127.0.0.1:8883
    ## - listener.ws.default = 127.0.0.1:800
    ## - listener.wss.default = 127.0.0.1:880
    ## 
    ## Default: 127.0.0.1:1883
    ## 
    ## Acceptable values:
    ##   - an IP/port pair, e.g. 127.0.0.1:10011
    ##   - a Unix Domain Socket, e.g. local:/var/run/app.sock:0
    listener.tcp.name = 127.0.0.1:1883
    
    ## 
    ## Acceptable values:
    ##   - an IP/port pair, e.g. 127.0.0.1:10011
    ## listener.ssl.name = 127.0.0.1:8883
    
    ## 'listener.tcp.my_listener.allow_anonymous_override' configures whether
    ## this listener is allowed to override the global allow_anonymous setting.
    ## The setting has one single purpose: to give a listener the capability to switch off
    ## all authentication plugins. (that is override a global allow_anonymous=off with a per-listener allow_anonymous=on).
    ## Specifically, it can allow TLS listeners to disable internal authentication (using only client certificates as
    ## authentication) while keeping all the other MQTT listeners safe.
    ## global | listener | Result for listener:  (on = anonymous access allowed)
    ## on	  | on	     | on
    ## off    | on	     | on
    ## off    | off      | off
    ## on     | off      | on
    ## Both values are simply OR'ed together. Please note that this does not allow you to globally allow anonymous access, and
    ## then selectively switch off single listeners!
    ## - listener.tcp.my_listener.allow_anonymous_override
    ## - listener.ssl.my_listener.allow_anonymous_override
    ## Allowed values are 'on' or 'off'. The default value for an unconfigured listener will be 'off'.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    listener.tcp.name.allow_anonymous_override = off
    
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    listener.ssl.name.allow_anonymous_override = off
    
    ## 'listener.tcp.allowed_protocol_versions' configures which
    ## protocol versions are allowed for an MQTT listener. The allowed
    ## protocol versions can be specified on the tcp, websocket or ssl level:
    ## - listener.tcp.allowed_protocol_versions
    ## - listener.ws.allowed_protocol_versions
    ## - listener.wss.allowed_protocol_versions
    ## - listener.ssl.allowed_protocol_versions
    ## or for a specific listener:
    ## - listener.tcp.my_tcp_listener.allowed_protocol_versions
    ## - listener.ws.my_ws_listener.allowed_protocol_versions
    ## - listener.wss.my_ws_listener.allowed_protocol_versions
    ## - listener.ssl.my_ws_listener.allowed_protocol_versions
    ## Allowed values are 3 (MQTT 3.1), 4 (MQTT 3.1.1), 5 (MQTT 5.0), 131
    ## (MQTT 3.1 bridge), 132 (MQTT 3.1.1 bridge).
    ## 
    ## Default: 3,4,5,131
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.tcp.allowed_protocol_versions = 3,4,5
    
    ## listener.vmq.clustering is the IP address and TCP port that
    ## the broker will bind to accept connections from other cluster
    ## nodes e.g:
    ## - listener.vmq.clustering = 0.0.0.0:18883
    ## This also works for SSL listeners:
    ## - listener.vmqs.clustering = 0.0.0.0:18884
    ## 
    ## Default: 0.0.0.0:44053
    ## 
    ## Acceptable values:
    ##   - an IP/port pair, e.g. 127.0.0.1:10011
    listener.vmq.clustering = 0.0.0.0:44053
    
    ## listener.http.default is the IP address and TCP port that
    ## the broker will bind to accept HTTP connections
    ## - listener.http.default = 0.0.0.0:8888
    ## This also works for SSL listeners:
    ## - listener.https.default= 0.0.0.0:8889
    ## 
    ## Default: 127.0.0.1:8888
    ## 
    ## Acceptable values:
    ##   - an IP/port pair, e.g. 127.0.0.1:10011
    listener.http.default = 127.0.0.1:8888
    
    ## The cafile is used to define the path to a file containing
    ## the PEM encoded CA certificates that are trusted. Set the cafile
    ## on the protocol level or on the listener level:
    ## - listener.ssl.cafile
    ## - listener.wss.cafile
    ## or on the listener level:
    ## - listener.ssl.my_ssl_listener.cafile
    ## - listener.wss.my_wss_listener.cafile
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## listener.ssl.cafile = ./etc/cacerts.pem
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## listener.https.cafile = ./etc/cacerts.pem
    
    ## Set the path to the PEM encoded server certificate
    ## on the protocol level or on the listener level:
    ## - listener.ssl.certfile
    ## - listener.wss.certfile
    ## or on the listener level:
    ## - listener.ssl.my_ssl_listener.certfile
    ## - listener.wss.my_wss_listener.certfile
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## listener.ssl.certfile = ./etc/cert.pem
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## listener.https.certfile = ./etc/cert.pem
    
    ## Set the path to the PEM encoded key file on the protocol
    ## level or on the listener level:
    ## - listener.ssl.keyfile
    ## - listener.wss.keyfile
    ## or on the listener level:
    ## - listener.ssl.my_ssl_listener.keyfile
    ## - listener.wss.my_wss_listener.keyfile
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## listener.ssl.keyfile = ./etc/key.pem
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## listener.vmqs.keyfile = ./etc/key.pem
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## listener.https.keyfile = ./etc/key.pem
    
    ## Set the list of allowed ciphers (each separated with a colon,
    ## e.g. "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384"),
    ## on the protocol level or on the listener level. Reasonable defaults
    ## are used if nothing is specified:
    ## - listener.ssl.ciphers
    ## - listener.wss.ciphers
    ## or on the listener level:
    ## - listener.ssl.my_ssl_listener.ciphers
    ## - listener.wss.my_wss_listener.ciphers
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.ssl.ciphers = 
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.vmqs.ciphers = 
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.https.ciphers = 
    
    ## Set the list of allowed elliptic curves (each separated with a colon,
    ## e.g. "[sect571k1,secp521r1,brainpoolP512r1]"), on the protocol level or on the listener level.
    ## All known curves are used if nothing is specified.
    ## - listener.ssl.eccs
    ## - listener.wss.eccs
    ## or on the listener level:
    ## - listener.ssl.my_ssl_listener.eccs
    ## - listener.wss.my_wss_listener.eccs
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.ssl.eccs = [brainpoolP384r1, secp384r1, sect283k1]
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.vmqs.eccs = [brainpoolP384r1, secp384r1, sect283k1]
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.https.eccs = [brainpoolP384r1, secp384r1, sect283k1]
    
    ## If you have 'listener.ssl.require_certificate' set to true,
    ## you can create a certificate revocation list file to revoke access
    ## to particular client certificates. If you have done this, use crlfile
    ## to point to the PEM encoded revocation file. This can be done on the
    ## protocol level or on the listener level.
    ## - listener.ssl.crlfile
    ## - listener.wss.crlfile
    ## or on the listener level:
    ## - listener.ssl.my_ssl_listener.crlfile
    ## - listener.wss.my_wss_listener.crlfile
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## listener.ssl.crlfile = 
    
    ## Enable this option if you want to use SSL client certificates
    ## to authenticate your clients. This can be done on the protocol level
    ## or on the listener level.
    ## - listener.ssl.require_certificate
    ## - listener.wss.require_certificate
    ## or on the listener level:
    ## - listener.ssl.my_ssl_listener.require_certificate
    ## - listener.wss.my_wss_listener.require_certificate
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    ## listener.ssl.require_certificate = off
    
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    ## listener.vmqs.require_certificate = off
    
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    ## listener.https.require_certificate = off
    
    ## Configure the TLS protocol version (tlsv1, tlsv1.1, tlsv1.2 or tlsv1.3) to be
    ## used for either all configured SSL listeners or for a specific listener:
    ## - listener.ssl.tls_version
    ## - listener.wss.tls_version
    ## or on the listener level:
    ## - listener.ssl.my_ssl_listener.tls_version
    ## - listener.wss.my_wss_listener.tls_version
    ## TLSv1.3 requires OTP 23 or later.
    ## 
    ## Default: tlsv1.2
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.ssl.tls_version = tlsv1.2
    
    ## 
    ## Default: tlsv1.2
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.vmqs.tls_version = tlsv1.2
    
    ## 
    ## Default: tlsv1.2
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.https.tls_version = tlsv1.2
    
    ## If 'listener.ssl.require_certificate' is enabled, you may enable
    ## 'listener.ssl.use_identity_as_username' to use the CN value from the client
    ## certificate as a username. If enabled other authentication plugins are not
    ## considered. The option can be specified either for all SSL listeners or for
    ## a specific listener:
    ## - listener.ssl.use_identity_as_username
    ## - listener.wss.use_identity_as_username
    ## or on the listener level:
    ## - listener.ssl.my_ssl_listener.use_identity_as_username
    ## - listener.wss.my_wss_listener.use_identity_as_username
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    ## listener.ssl.use_identity_as_username = off
    
    ## If listener.ssl.pskfile is enabled VerneMQ supports TLS connection based on
    ## pre-shared keys (PSK).
    ## The option can be specified either for all SSL listeners or for
    ## a specific listener.
    ## - listener.ssl.psk_support
    ## or on the listener level:
    ## - listener.ssl.my_ssl_listener.psk_support
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    ## listener.ssl.psk_support = off
    
    ## The PSK hint sent by the server to the client.
    ## The option can be specified either for all SSL listeners or for
    ## a specific listener.
    ## - listener.ssl.psk_identity_hint
    ## or on the listener level:
    ## - listener.ssl.my_ssl_listener.psk_identity_hint
    ## 
    ## Default: VMQ_PSK
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.ssl.psk_identity_hint = VMQ_PSK
    
    ## If PSK support is enabled, the pre-shared keys must be provided as key value pairs
    ## separated by a separator (by default ":"), e.g.
    ## mypskidentity:mypskkey
    ## The key is a string (not hex-encoded). The psk file is used for all listeners.
    ## 
    ## Default: ./etc/vmq.psk
    ## 
    ## Acceptable values:
    ##   - the path to a file
    listener.ssl.pskfile = ./etc/vmq.psk
    
    ## The pre-shared keys and the psk identity are separated by a separator.
    ## By default, a colon is used.
    ## 
    ## Default: :
    ## 
    ## Acceptable values:
    ##   - text
    ## listener.ssl.pskfile_separator = :
    
    ## Enable the $SYSTree Reporter.
    ## 
    ## Default: on
    ## 
    ## Acceptable values:
    ##   - on or off
    systree_enabled = on
    
    ## The integer number of milliseconds between updates of the $SYS subscription hierarchy,
    ## which provides status information about the broker. If unset, defaults to 20 seconds.
    ## Set to 0 to disable publishing the $SYS hierarchy completely.
    ## 
    ## Default: 20000
    ## 
    ## Acceptable values:
    ##   - an integer
    systree_interval = 20000
    
    ## Prometheus namespace prefix
    ## 
    ## Default: vernemq_
    ## 
    ## Acceptable values:
    ##   - text
    prometheus_namespace = vernemq_
    
    ## Enable the Graphite Reporter. Ensure to also configure a
    ## proper graphite.host
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    graphite_enabled = off
    
    ## the graphite server host name
    ## 
    ## Default: localhost
    ## 
    ## Acceptable values:
    ##   - text
    graphite_host = localhost
    
    ## the tcp port of the graphite server
    ## 
    ## Default: 2003
    ## 
    ## Acceptable values:
    ##   - an integer
    graphite_port = 2003
    
    ## the interval we push metrics to the graphite server in ms
    ## 
    ## Default: 20000
    ## 
    ## Acceptable values:
    ##   - an integer
    graphite_interval = 20000
    
    ## set the prefix that is applied to all metrics reported to graphite
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## graphite_prefix = my-prefix
    
    ## the graphite server api key, e.g. used by hostedgraphite.com
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## graphite_api_key = My-Api-Key
    
    ## Distribution policy for shared subscriptions. Default is
    ## 'prefer_local' which will ensure that local subscribers will be
    ## used if any are available. 'local_only' will select a random local
    ## subscriber if any are available. 'random' will randomly choose
    ## between all available subscribers.
    ## 
    ## Default: prefer_local
    ## 
    ## Acceptable values:
    ##   - text
    shared_subscription_policy = prefer_local
    
    ## plugins.<plugin> enables/disables a plugin.
    ## Plugin specific settings are set via the plugin itself, i.e., to
    ## set the 'file' setting for the myplugin plugin, add a line like:
    ## myplugin.file = /path/to/file
    ## 
    ## Acceptable values:
    ##   - on or off
    ## plugins.name = on
    
    ## plugins.<name>.path defines the location of the plugin
    ## associated with <name>. This is needed for plugins that are not
    ## shipped with VerneMQ.
    ## 
    ## Acceptable values:
    ##   - the path to a directory
    ## plugins.mypluginname.path = /path/to/myplugin
    
    ## plugins.<name>.priority defines the load order of the
    ## plugins. Plugins are loaded by priority. If no priority is given
    ## the load order is undefined. Prioritized plugins will always be
    ## loaded before plugins with no defined priority.
    ## 
    ## Acceptable values:
    ##   - an integer
    ## plugins.mypluginname.priority = 5
    
    ## File based authentication plugin.
    ## 
    ## Default: on
    ## 
    ## Acceptable values:
    ##   - on or off
    plugins.vmq_passwd = on
    
    ## File based authorization plugin.
    ## 
    ## Default: on
    ## 
    ## Acceptable values:
    ##   - on or off
    plugins.vmq_acl = on
    
    ## Lua based plugins.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    plugins.vmq_diversity = off
    
    ## Webhook based plugins.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    plugins.vmq_webhooks = off
    
    ## The VerneMQ bridge plugin.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    plugins.vmq_bridge = off
    
    ## Limits the maximum topic depth
    ## 
    ## Default: 10
    ## 
    ## Acceptable values:
    ##   - an integer
    topic_max_depth = 10
    
    ## Specifies the metadata plugin that is used for storing and replicating
    ## VerneMQ metadata objects such as MQTT subscriptions and retained messages.
    ## The default is kept at `vmq_plumtree` for compatibility with existing deployments.
    ## For new cluster deployments, the recommendation is to use 'vmq_swc' from the
    ## beginning. Note that the 2 protocols are not compatible, so clusters can't be
    ## mixed.
    ## 
    ## Default: vmq_swc
    ## 
    ## Acceptable values:
    ##   - one of: vmq_plumtree, vmq_swc
    metadata_plugin = vmq_swc
    
    ## Set the path to an access control list file.
    ## 
    ## Default: ./etc/vmq.acl
    ## 
    ## Acceptable values:
    ##   - the path to a file
    vmq_acl.acl_file = ./etc/vmq.acl
    
    ## set the acl reload interval in seconds, the value 0 disables
    ## the automatic reloading of the acl file.
    ## 
    ## Default: 10
    ## 
    ## Acceptable values:
    ##   - an integer
    vmq_acl.acl_reload_interval = 10
    
    ## Set the path to a password file.
    ## 
    ## Default: ./etc/vmq.passwd
    ## 
    ## Acceptable values:
    ##   - the path to a file
    vmq_passwd.password_file = ./etc/vmq.passwd
    
    ## set the password reload interval in seconds, the value 0
    ## disables the automatic reloading of the password file.
    ## 
    ## Default: 10
    ## 
    ## Acceptable values:
    ##   - an integer
    vmq_passwd.password_reload_interval = 10
    
    ## Configure the vmq_diversity plugin script dir. The script dir
    ## is searched for Lua scripts which are automatically loaded when the
    ## plugin is enabled.
    ## 
    ## Default: ./share/lua
    ## 
    ## Acceptable values:
    ##   - the path to a directory
    vmq_diversity.script_dir = ./share/lua
    
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    vmq_diversity.auth_postgres.enabled = off
    
    ## 
    ## Default: localhost
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.postgres.host = localhost
    
    ## 
    ## Default: 5432
    ## 
    ## Acceptable values:
    ##   - an integer
    ## vmq_diversity.postgres.port = 5432
    
    ## 
    ## Default: root
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.postgres.user = root
    
    ## 
    ## Default: password
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.postgres.password = password
    
    ## 
    ## Default: vernemq_db
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.postgres.database = vernemq_db
    
    ## Specify if the postgresql driver should use TLS or not.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    vmq_diversity.postgres.ssl = off
    
    ## The cafile is used to define the path to a file containing
    ## the PEM encoded CA certificates that are trusted.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.postgres.cafile = ./etc/cafile.pem
    
    ## Set the path to the PEM encoded server certificate.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.postgres.certfile = ./etc/cert.pem
    
    ## Set the path to the PEM encoded key file.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.postgres.keyfile = ./etc/keyfile.pem
    
    ## Allow the plugin to open SSL connections to remote DB with wildcard certs
    ## 
    ## Default: https
    ## 
    ## Acceptable values:
    ##   - one of: https
    ## vmq_diversity.postgres.ssl.customize_hostname_check = on
    
    ## Whether the client verifies the server cert or not.
    ## Use "verify_peer" in production.
    ## 
    ## Default: verify_peer
    ## 
    ## Acceptable values:
    ##   - one of: verify_none, verify_peer
    vmq_diversity.postgres.ssl.verify = verify_peer
    
    ## Whether to use the System CAs (public_key:cacerts_get/0).
    ## Can be used as an alternative to provide a CAcertfile
    ## 
    ## Default: on
    ## 
    ## Acceptable values:
    ##   - on or off
    vmq_diversity.postgres.ssl.use_system_cas = on
    
    ## The password hashing method to use in PostgreSQL:
    ## 
    ## Default: crypt
    ## 
    ## Acceptable values:
    ##   - one of: crypt, bcrypt
    vmq_diversity.postgres.password_hash_method = crypt
    
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    vmq_diversity.auth_cockroachdb.enabled = off
    
    ## 
    ## Default: localhost
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.cockroachdb.host = localhost
    
    ## 
    ## Default: 5432
    ## 
    ## Acceptable values:
    ##   - an integer
    ## vmq_diversity.cockroachdb.port = 5432
    
    ## 
    ## Default: root
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.cockroachdb.user = root
    
    ## 
    ## Default: password
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.cockroachdb.password = password
    
    ## 
    ## Default: vernemq_db
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.cockroachdb.database = vernemq_db
    
    ## Specify if the cockroachdb driver should use TLS or not.
    ## 
    ## Default: on
    ## 
    ## Acceptable values:
    ##   - on or off
    vmq_diversity.cockroachdb.ssl = on
    
    ## The cafile is used to define the path to a file containing
    ## the PEM encoded CA certificates that are trusted.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.cockroachdb.cafile = ./etc/cafile.pem
    
    ## Set the path to the PEM encoded server certificate.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.cockroachdb.certfile = ./etc/cert.pem
    
    ## Set the path to the PEM encoded key file.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.cockroachdb.keyfile = ./etc/keyfile.pem
    
    ## Allow the plugin to open SSL connections to remote DB with wildcard certs
    ## 
    ## Default: https
    ## 
    ## Acceptable values:
    ##   - one of: https
    ## vmq_diversity.cockroachdb.ssl.customize_hostname_check = on
    
    ## Whether the client verifies the server cert or not.
    ## Use "verify_peer" in production.
    ## 
    ## Default: verify_peer
    ## 
    ## Acceptable values:
    ##   - one of: verify_none, verify_peer
    vmq_diversity.cockroachdb.ssl.verify = verify_peer
    
    ## Whether to use the System CAs (public_key:cacerts_get/0).
    ## Can be used as an alternative to provide a CAcertfile
    ## 
    ## Default: on
    ## 
    ## Acceptable values:
    ##   - on or off
    vmq_diversity.cockroachdb.ssl.use_system_cas = on
    
    ## The password hashing method to use in CockroachDB:
    ## 
    ## Default: bcrypt
    ## 
    ## Acceptable values:
    ##   - one of: sha256, bcrypt
    vmq_diversity.cockroachdb.password_hash_method = bcrypt
    
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    vmq_diversity.auth_mysql.enabled = off
    
    ## 
    ## Default: localhost
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.mysql.host = localhost
    
    ## 
    ## Default: 3306
    ## 
    ## Acceptable values:
    ##   - an integer
    ## vmq_diversity.mysql.port = 3306
    
    ## 
    ## Default: root
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.mysql.user = root
    
    ## 
    ## Default: password
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.mysql.password = password
    
    ## 
    ## Default: vernemq_db
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.mysql.database = vernemq_db
    
    ## The password hashing method to use in MySQL:
    ## password: Default for compatibility, deprecated since MySQL 5.7.6 and not
    ## usable with MySQL 8.0.11+.
    ## Docs: https://dev.mysql.com/doc/refman/5.7/en/encryption-functions.html#function_password
    ## md5: Calculates an MD5 128-bit checksum of the password.
    ## Docs: https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_md5
    ## sha1: Calculates the SHA-1 160-bit checksum for the password.
    ## Docs: https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_sha1
    ## sha256: Calculates the SHA-2 hash of the password, using 256 bits.
    ## Works only if MySQL has been configured with SSL support.
    ## Docs: https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html#function_sha2
    ## 
    ## Default: password
    ## 
    ## Acceptable values:
    ##   - one of: password, md5, sha1, sha256
    vmq_diversity.mysql.password_hash_method = password
    
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    vmq_diversity.auth_mongodb.enabled = off
    
    ## 
    ## Default: localhost
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.mongodb.host = localhost
    
    ## 
    ## Default: 27017
    ## 
    ## Acceptable values:
    ##   - an integer
    ## vmq_diversity.mongodb.port = 27017
    
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.mongodb.login = 
    
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.mongodb.password = 
    
    ## 
    ## Default: admin
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.mongodb.auth_source = 
    
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.mongodb.database = 
    
    ## Specify if the mongodb driver should use TLS or not.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    vmq_diversity.mongodb.ssl = off
    
    ## The cafile is used to define the path to a file containing
    ## the PEM encoded CA certificates that are trusted.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.mongodb.cafile = ./etc/cafile.pem
    
    ## Set the path to the PEM encoded server certificate.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.mongodb.certfile = ./etc/cert.pem
    
    ## Set the path to the PEM encoded key file.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.mongodb.keyfile = ./etc/keyfile.pem
    
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    vmq_diversity.auth_redis.enabled = off
    
    ## 
    ## Default: localhost
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.redis.host = localhost
    
    ## 
    ## Default: 6379
    ## 
    ## Acceptable values:
    ##   - an integer
    ## vmq_diversity.redis.port = 6379
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.redis.password = 
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.redis.user = 
    
    ## 
    ## Default: 0
    ## 
    ## Acceptable values:
    ##   - an integer
    ## vmq_diversity.redis.database = 0
    
    ## 
    ## Default: localhost
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_diversity.memcache.host = localhost
    
    ## 
    ## Default: 11211
    ## 
    ## Acceptable values:
    ##   - an integer
    ## vmq_diversity.memcache.port = 11211
    
    ## vmq_diversity.<name>.file = <file> loads a specific lua
    ## script when `vmq_diversity` starts. The scripts are loaded in the
    ## order defined by the names given, i.e., the script with <name>
    ## 'script1' is started before the script with <name> 'script2'.
    ## Scripts loaded like this are loaded after the scripts in the
    ## default script dir.
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_diversity.script1.file = path/to/my/script.lua
    
    ## The pool_size specifies how many bcrypt port operations are
    ## allowed concurrently. The value `auto` will try to detect all
    ## logical cpus and set the pool size to that number minus 1.
    ## If the number of logical cpus cannot be detected, a value of 1 is used.
    ## 
    ## Default: 1
    ## 
    ## Acceptable values:
    ##   - an integer
    ##   - one of: auto
    vmq_bcrypt.pool_size = 1
    
    ## The pool_size specifies how many bcrypt NIF operations are
    ## allowed concurrently. The value `auto` will try to detect all
    ## logical cpus and set the pool size to that number minus 1.
    ## If the number of logical cpus cannot be detected, a value of 1 is used.
    ## 
    ## Default: 4
    ## 
    ## Acceptable values:
    ##   - an integer
    ##   - one of: auto
    vmq_bcrypt.nif_pool_size = 4
    
    ## Specifies the max workers to overflow of the bcrypt NIF program pool.
    ## 
    ## Default: 10
    ## 
    ## Acceptable values:
    ##   - an integer
    vmq_bcrypt.nif_pool_max_overflow = 10
    
    ## Specifies the number of bcrypt log rounds, defining the hashing complexity.
    ## 
    ## Default: 12
    ## 
    ## Acceptable values:
    ##   - an integer
    vmq_bcrypt.default_log_rounds = 12
    
    ## Specify whether bcrypt is called as an Erlang port or NIF
    ## 
    ## Default: port
    ## 
    ## Acceptable values:
    ##   - one of: nif, port
    vmq_bcrypt.mechanism = port
    
    ## To configure and register a webhook a hook and an endpoint
    ## need to be configured and this is achieved by associating both with
    ## a name. vmq_webhooks.<name>.hook = <hook> associates the hook
    ## <hook> with the name <name>. Webhooks are registered in the order
    ## of the name given to it. Therefore a webhook with name 'webhook1'
    ## is registered before a webhook with the name 'webhook2'.
    ## 
    ## Acceptable values:
    ##   - one of: auth_on_register, auth_on_publish, auth_on_subscribe, on_register, on_publish, on_subscribe, on_unsubscribe, on_deliver, on_offline_message, on_client_wakeup, on_client_offline, on_client_gone, on_session_expired, auth_on_register_m5, auth_on_publish_m5, auth_on_subscribe_m5, on_register_m5, on_publish_m5, on_subscribe_m5, on_unsubscribe_m5, on_deliver_m5, on_auth_m5
    ## vmq_webhooks.webhook1.hook = auth_on_register
    
    ## Associate an endpoint with a name.
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_webhooks.webhook1.endpoint = http://localhost/myendpoints
    
    ## Configure TLS version for HTTPS webhook calls
    ## HTTPS webhooks.
    ## 
    ## Default: tlsv1.2
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_webhooks.tls_version = tlsv1.2
    
    ## Specify the address and port of the bridge to connect to. Several
    ## bridges can be configured by using different bridge names (e.g. br0). If the
    ## connection supports SSL encryption bridge.ssl.<name> can be used.
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_bridge.tcp.br0 = 127.0.0.1:1889
    
    ## Set the clean session option for the bridge. By default this is disabled,
    ## which means that all subscriptions on the remote broker are kept in case of
    ## the network connection dropping. If enabled, all subscriptions and messages
    ## on the remote broker will be cleaned up if the connection drops.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    ## vmq_bridge.tcp.br0.cleansession = off
    
    ## Set the client id for this bridge connection. If not defined, this
    ## defaults to 'name.hostname', where name is the connection name and hostname
    ## is the hostname of this computer.
    ## 
    ## Default: auto
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_bridge.tcp.br0.client_id = auto
    
    ## Set the number of seconds after which the bridge should send a ping if
    ## no other traffic has occurred.
    ## 
    ## Default: 60
    ## 
    ## Acceptable values:
    ##   - an integer
    ## vmq_bridge.tcp.br0.keepalive_interval = 60
    
    ## Set the persistent queue option for this bridge. By default this is off,
    ## meaning the queued messages are stored in memory only. If set to on,
    ## queued messages will be stored persistently on disk. If you're using
    ## persistency, make sure to configure the queue_dir option.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    ## vmq_bridge.tcp.br0.persistent_queue = off
    
    ## Set the directory where queued messages for this bridge will be stored, if persistent_queue is set to on.
    ## It is recommended to set this to a unique directory. If you're using multiple persistent bridge instances,
    ## then omitting this option can lead to serious problems.
    ## 
    ## Acceptable values:
    ##   - the path to a directory
    ## vmq_bridge.tcp.br0.queue_dir = /qdata/br0
    
    ## Configure a username for the bridge. This is used for authentication
    ## purposes when connecting to a broker that supports MQTT v3.1 and requires a
    ## username and/or password to connect. See also the password option.
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_bridge.tcp.br0.username = my_remote_user
    
    ## Configure a password for the bridge. This is used for authentication
    ## purposes when connecting to a broker that supports MQTT v3.1 and requires a
    ## username and/or password to connect. This option is only valid if a username
    ## is also supplied.
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_bridge.tcp.br0.password = my_remote_password
    
    ## Configure segment size (in Bytes) for the persistent queue.
    ## 
    ## Default: 4KB
    ## 
    ## Acceptable values:
    ##   - a byte size with units, e.g. 10GB
    ## vmq_bridge.tcp.br0.segment_size = 4KB
    
    ## Number of messages that will be read from the queue and published at once.
    ## Only after the entire batch has been completed (e.g. all messages were pubacked if QoS 1),
    ## will the next batch be read. Set this to a lower value if you encounter bandwidth problems.
    ## 
    ## Default: 100
    ## 
    ## Acceptable values:
    ##   - an integer
    ## vmq_bridge.tcp.br0.outgoing_batch_size = 100
    
    ## Define one or more topic patterns to be shared between the two brokers.
    ## Any topics matching the pattern (including wildcards) are shared.
    ## The following format is used:
    ## pattern [[[ out | in | both ] qos-level] local-prefix remote-prefix]
    ## [ out | in | both ]: specifies that this bridge exports messages (out), imports
    ## messages (in) or shares messages in both directions (both). If undefined we default to
    ## export (out).
    ## qos-level: specifies the publish/subscribe QoS level used for this
    ## topic. If undefined we default to QoS 0.
    ## local-prefix and remote-prefix: For incoming topics, the bridge
    ## will prepend the pattern with the remote prefix and subscribe to
    ## the resulting topic on the remote broker.  When a matching
    ## incoming message is received, the remote prefix will be removed
    ## from the topic and then the local prefix added.
    ## For outgoing topics, the bridge will prepend the pattern with the
    ## local prefix and subscribe to the resulting topic on the local
    ## broker. When an outgoing message is processed, the local prefix
    ## will be removed from the topic then the remote prefix added.
    ## For shared subscriptions topic prefixes are applied only to the
    ## topic part of the subscription.
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_bridge.tcp.br0.topic.1 = topic
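    
    ## A sketch of two topic mappings (hypothetical topic names and prefixes):
    ## export everything below 'sensors/' at QoS 1, so that local 'local/sensors/...'
    ## messages are published as 'remote/sensors/...' on the remote broker, and
    ## import 'commands/#' from the remote broker at QoS 0.
    ## vmq_bridge.tcp.br0.topic.1 = sensors/# out 1 local/ remote/
    ## vmq_bridge.tcp.br0.topic.2 = commands/# in 0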
    
    ## Set the amount of time (in seconds) a bridge using the automatic start type
    ## will wait until attempting to reconnect.
    ## 
    ## Default: 10
    ## 
    ## Acceptable values:
    ##   - an integer
    ## vmq_bridge.tcp.br0.restart_timeout = 10
    
    ## If try_private is enabled, the bridge will attempt to indicate to the
    ## remote broker that it is a bridge rather than an ordinary client.
    ## Note that loop detection for bridges is not yet implemented.
    ## 
    ## Default: on
    ## 
    ## Acceptable values:
    ##   - on or off
    ## vmq_bridge.tcp.br0.try_private = on
    
    ## Set the MQTT protocol version to be used by the bridge.
    ## 
    ## Default: 3
    ## 
    ## Acceptable values:
    ##   - one of: 3, 4
    ## vmq_bridge.tcp.br0.mqtt_version = 3
    
    ## Maximum number of outgoing messages the bridge will buffer
    ## while not connected to the remote broker. Messages published while
    ## the buffer is full are dropped. A value of 0 means buffering is
    ## disabled.
    ## 
    ## Default: 0
    ## 
    ## Acceptable values:
    ##   - an integer
    ## vmq_bridge.tcp.br0.max_outgoing_buffered_messages = 0
    
    ## Percentage of max_outgoing_buffered_messages that will be used for the pubrel queue.
    ## The pubrel queue is only relevant when bridging topics with QoS 2; in that case it is
    ## highly recommended to set this option. Allowed values: 0-100, e.g. a value of 10 means 10%.
    ## If this is set to 0, the pubrel queue is disabled (default).
    ## Caution: the segment_size should not be bigger than max_outgoing_buffered_messages * (pubrel_queue_ratio/100).
    ## 
    ## Default: 0%
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_bridge.tcp.br0.pubrel_queue_ratio = 10%
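    
    ## A worked example (assumed values): with
    ## vmq_bridge.tcp.br0.max_outgoing_buffered_messages = 1000 and
    ## vmq_bridge.tcp.br0.pubrel_queue_ratio = 10%
    ## roughly 1000 * (10/100) = 100 entries are reserved for the pubrel queue, and
    ## per the caution above the segment_size should then not exceed 100.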
    
    ## The cafile is used to define the path to a file containing
    ## the PEM encoded CA certificates that are trusted.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_bridge.ssl.sbr0.cafile = ./etc/cacerts.pem
    
    ## Set the path to the PEM encoded server certificate.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_bridge.ssl.sbr0.certfile = ./etc/cert.pem
    
    ## Set the path to the PEM encoded key file.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - the path to a file
    ## vmq_bridge.ssl.sbr0.keyfile = ./etc/key.pem
    
    ## When using certificate based TLS, the bridge will attempt to verify that the
    ## hostname provided in the remote certificate matches the host/address being
    ## connected to. This may cause problems in testing scenarios, so this option
    ## may be enabled to disable hostname verification.
    ## Setting this option to 'on' means that a malicious third party could
    ## potentially impersonate your server, so it should always be disabled in
    ## production environments.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    ## vmq_bridge.ssl.sbr0.insecure = off
    
    ## Configure the TLS protocol version (tlsv1, tlsv1.1, or tlsv1.2) to be
    ## used for this bridge.
    ## 
    ## Default: tlsv1.2
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_bridge.ssl.sbr0.tls_version = tlsv1.2
    
    ## Pre-shared-key encryption provides an alternative to certificate based
    ## encryption. This option specifies the identity used.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_bridge.ssl.sbr0.identity = 
    
    ## Pre-shared-key encryption provides an alternative to certificate based
    ## encryption. This option specifies the shared secret used in hexadecimal
    ## format without leading '0x'.
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_bridge.ssl.sbr0.psk = 
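    
    ## A sketch of PSK-based bridge encryption (hypothetical identity and secret):
    ## both values must match what the remote broker expects, and the secret is
    ## plain hexadecimal without a leading '0x'.
    ## vmq_bridge.ssl.sbr0.identity = bridge-sbr0
    ## vmq_bridge.ssl.sbr0.psk = 8a1f4c0db2e64d7f9b3a5c6e7f801234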
    
    ## Allow the bridge to open SSL connections to a remote broker that uses
    ## wildcard certificates.
    ## 
    ## Default: https
    ## 
    ## Acceptable values:
    ##   - one of: https
    ## vmq_bridge.ssl.name.customize_hostname_check = https
    
    ## Specifies the auth method used after the API call has been authenticated
    ## /authorized by the endpoint. The only possible values are "on-behalf-of" and
    ## "predefined".
    ## If you specify "on-behalf-of", every HTTP call needs to provide a client-id,
    ## a username and a password. Those are only used to authenticate/authorize
    ## the request. They have no impact on any already connected client with the
    ## same id. You should in any case consider using dedicated users and clients
    ## for HTTP calls.
    ## It is possible to set the app_auth setting on each listener:
    ## listener.https.$name$.http_modules.vmq_http_pub.app_auth
    ## listener.http.$name$.http_modules.vmq_http_pub.app_auth
    ## Using plain HTTP (instead of HTTPS) is not recommended.
    ## 
    ## Default: on-behalf-of
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_http_pub.mqtt_auth.mode = on-behalf-of
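    
    ## A sketch of a per-listener override (hypothetical listener name 'api'):
    ## switch that HTTPS listener to the "predefined" auth mode instead of the
    ## global default.
    ## listener.https.api.http_modules.vmq_http_pub.app_auth = predefined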
    
    ## 
    ## Default: mqtt
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_http_pub.mqtt_auth.auth_plugin = mqtt
    
    ## 
    ## Default: password
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_web_ui.admin.auth = password
    
    ## 
    ## Default: admin
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_web_ui.admin.user_name = admin
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_web_ui.admin.user_pwd = 
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_web_ui.mgmt_api.key = 
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_web_ui.mgmt_api.port = 
    
    ## 
    ## Default: 
    ## 
    ## Acceptable values:
    ##   - text
    ## vmq_web_ui.mgmt_api.scheme = 
    
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    ## vmq_web_ui.file_access.allow_read = off
    
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    ## vmq_web_ui.file_access.allow_write = off
    
    ## Where to emit the default log messages (typically at 'info'
    ## severity):
    ## off: disabled
    ## file: the file specified by log.console.file
    ## console: to standard output (seen when using `vmq attach-direct`)
    ## both: log.console.file and standard out.
    ## 
    ## Default: file
    ## 
    ## Acceptable values:
    ##   - one of: off, file, console, both
    log.console = file
    
    ## The severity level of the console log, default is 'info'.
    ## 
    ## Default: info
    ## 
    ## Acceptable values:
    ##   - one of: debug, info, notice, warning, error, critical, alert, emergency
    log.console.level = info
    
    ## When 'log.console' is set to 'file' or 'both', the file where
    ## console messages will be logged.
    ## 
    ## Default: ./log/console.log
    ## 
    ## Acceptable values:
    ##   - the path to a file
    log.console.file = ./log/console.log
    
    ## Logger format for console logging to standard output: text or json (default: text).
    ## 
    ## Default: text
    ## 
    ## Acceptable values:
    ##   - one of: text, json
    log.console.console.format = text
    
    ## Logger format for console logging to file: text or json (default: text).
    ## 
    ## Default: text
    ## 
    ## Acceptable values:
    ##   - one of: text, json
    log.console.file.format = text
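    
    ## A sketch of a common logging setup (hypothetical file path): log to both
    ## standard output and a file, with the file sink emitting JSON.
    ## log.console = both
    ## log.console.file = /var/log/vernemq/console.log
    ## log.console.file.format = json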
    
    ## Maximum size of the console log in bytes, before it is rotated
    ## 
    ## Default: infinity
    ## 
    ## Acceptable values:
    ##   - a byte size with units, e.g. 10GB
    ##   - the text "infinity"
    log.console.rotation.size = infinity
    
    ## The number of rotated console logs to keep. When set to
    ## '0', only the current open log file is kept. This setting is only
    ## considered if log.console.rotation.size is different from "infinity".
    ## 
    ## Default: 5
    ## 
    ## Acceptable values:
    ##   - an integer
    log.console.rotation.keep = 5
    
    ## Should rotated console log file archives be compressed (default off)
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    log.console.rotation.compress_on_rotate = off
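    
    ## A sketch of size-based rotation (assumed limits): rotate the console log at
    ## 100MB, keep the last 10 archives and compress them.
    ## log.console.rotation.size = 100MB
    ## log.console.rotation.keep = 10
    ## log.console.rotation.compress_on_rotate = on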
    
    ## Enables or disables the dedicated error logger
    ## 
    ## Default: on
    ## 
    ## Acceptable values:
    ##   - on or off
    log.error = on
    
    ## The file where error messages will be logged.
    ## 
    ## Default: ./log/error.log
    ## 
    ## Acceptable values:
    ##   - the path to a file
    log.error.file = ./log/error.log
    
    ## Logger format: text or json (default: text).
    ## 
    ## Default: text
    ## 
    ## Acceptable values:
    ##   - one of: text, json
    log.error.file.format = text
    
    ## Maximum size of the error log in bytes, before it is rotated
    ## 
    ## Default: infinity
    ## 
    ## Acceptable values:
    ##   - a byte size with units, e.g. 10GB
    ##   - the text "infinity"
    log.error.rotation.size = infinity
    
    ## The number of rotated error logs to keep. When set to
    ## '0', only the current open log file is kept. This setting is only
    ## considered if log.error.rotation.size is different from "infinity".
    ## 
    ## Default: 5
    ## 
    ## Acceptable values:
    ##   - an integer
    log.error.rotation.keep = 5
    
    ## Should rotated log file archives be compressed (default off)
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    log.error.rotation.compress_on_rotate = off
    
    ## Enables or disables the dedicated crash logger. Crash logs are also written
    ## to the error log, so this logger is disabled by default.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    log.crash = off
    
    ## The file where crash messages will be logged.
    ## 
    ## Default: ./log/crash.log
    ## 
    ## Acceptable values:
    ##   - the path to a file
    log.crash.file = ./log/crash.log
    
    ## Maximum size of the crash log in bytes, before it is rotated
    ## 
    ## Default: infinity
    ## 
    ## Acceptable values:
    ##   - a byte size with units, e.g. 10GB
    ##   - the text "infinity"
    log.crash.rotation.size = infinity
    
    ## The number of rotated crash logs to keep. When set to
    ## '0', only the current open log file is kept.
    ## 
    ## Default: 5
    ## 
    ## Acceptable values:
    ##   - an integer
    log.crash.rotation.keep = 5
    
    ## Should rotated log file archives be compressed (default off)
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    log.crash.rotation.compress_on_rotate = off
    
    ## Enables or disables the dedicated SASL logger. SASL reports are also written
    ## to the error log, so this logger is disabled by default.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    log.sasl = off
    
    ## The file where sasl messages will be logged.
    ## 
    ## Default: ./log/sasl.log
    ## 
    ## Acceptable values:
    ##   - the path to a file
    log.sasl.file = ./log/sasl.log
    
    ## Maximum size of the sasl log in bytes, before it is rotated
    ## 
    ## Default: infinity
    ## 
    ## Acceptable values:
    ##   - a byte size with units, e.g. 10GB
    ##   - the text "infinity"
    log.sasl.rotation.size = infinity
    
    ## The number of rotated sasl logs to keep. When set to
    ## '0', only the current open log file is kept.
    ## 
    ## Default: 5
    ## 
    ## Acceptable values:
    ##   - an integer
    log.sasl.rotation.keep = 5
    
    ## Should rotated log file archives be compressed (default off)
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    log.sasl.rotation.compress_on_rotate = off
    
    ## When set to 'on', enables log output to syslog.
    ## 
    ## Default: off
    ## 
    ## Acceptable values:
    ##   - on or off
    log.syslog = off
    
    ## Name of the Erlang node
    ## 
    ## Default: VerneMQ@127.0.0.1
    ## 
    ## Acceptable values:
    ##   - text
    nodename = VerneMQ@127.0.0.1
    
    ## Cookie for distributed node communication.  All nodes in the
    ## same cluster should use the same cookie or they will not be able to
    ## communicate.
    ## IMPORTANT!!! SET the cookie to a private value! DO NOT LEAVE AT DEFAULT!
    ## 
    ## Default: vmq
    ## 
    ## Acceptable values:
    ##   - text
    distributed_cookie = vmq
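    
    ## A sketch for a clustered setup (hypothetical address and secret): give the
    ## node a name that is reachable from the other cluster members and use the
    ## same private cookie on every node.
    ## nodename = VerneMQ@192.168.1.10
    ## distributed_cookie = ChangeMeToALongRandomSecret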
    
    ## Sets the number of threads in async thread pool, valid range
    ## is 0-1024. If thread support is available, the default is 64.
    ## More information at: http://erlang.org/doc/man/erl.html
    ## 
    ## Default: 64
    ## 
    ## Acceptable values:
    ##   - an integer
    erlang.async_threads = 64
    
    ## The number of concurrent ports/sockets
    ## Valid range is 1024-134217727
    ## 
    ## Default: 262144
    ## 
    ## Acceptable values:
    ##   - an integer
    erlang.max_ports = 262144
    
    ## Set scheduler forced wakeup interval. All run queues will be
    ## scanned each Interval milliseconds. While there are sleeping
    ## schedulers in the system, one scheduler will be woken for each
    ## non-empty run queue found. An Interval of zero disables this
    ## feature, which also is the default.
    ## This feature is a workaround for long-executing native code, and for
    ## native code that does not bump reductions properly.
    ## More information: http://www.erlang.org/doc/man/erl.html#+sfwi
    ## 
    ## Acceptable values:
    ##   - an integer
    ## erlang.schedulers.force_wakeup_interval = 500
    
    ## Enable or disable scheduler compaction of load. By default
    ## scheduler compaction of load is enabled. When enabled, load
    ## balancing will strive for a load distribution which causes as many
    ## scheduler threads as possible to be fully loaded (i.e., not run out
    ## of work). This is accomplished by migrating load (e.g. runnable
    ## processes) into a smaller set of schedulers when schedulers
    ## frequently run out of work. When disabled, the frequency with which
    ## schedulers run out of work will not be taken into account by the
    ## load balancing logic.
    ## More information: http://www.erlang.org/doc/man/erl.html#+scl
    ## 
    ## Acceptable values:
    ##   - one of: true, false
    ## erlang.schedulers.compaction_of_load = false
    
    ## Enable or disable scheduler utilization balancing of load. By
    ## default scheduler utilization balancing is disabled and instead
    ## scheduler compaction of load is enabled which will strive for a
    ## load distribution which causes as many scheduler threads as
    ## possible to be fully loaded (i.e., not run out of work). When
    ## scheduler utilization balancing is enabled the system will instead
    ## try to balance scheduler utilization between schedulers. That is,
    ## strive for equal scheduler utilization on all schedulers.
    ## More information: http://www.erlang.org/doc/man/erl.html#+sub
    ## 
    ## Acceptable values:
    ##   - one of: true, false
    ## erlang.schedulers.utilization_balancing = true
    
    ## This parameter defines the percentage of total server memory
    ## to assign to LevelDB. LevelDB will dynamically adjust its internal
    ## cache sizes to stay within this size.  The memory size can
    ## alternatively be assigned as a byte count via leveldb.maximum_memory
    ## instead.
    ## 
    ## Default: 70
    ## 
    ## Acceptable values:
    ##   - an integer
    leveldb.maximum_memory.percent = 70
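    
    ## A worked example (assumed host size): on a machine with 16GB of RAM the
    ## default of 70 lets LevelDB use roughly 11.2GB. To pin an absolute limit
    ## instead, use the byte-count variant mentioned above (4 GiB shown here):
    ## leveldb.maximum_memory = 4294967296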
    
    ## Enables or disables the compression of data on disk.
    ## Enabling (default) saves disk space.  Disabling may reduce read
    ## latency but increases overall disk activity.  This option can be
    ## changed at any time, but will not impact data on disk until the
    ## next time a file requires compaction.
    ## 
    ## Default: on
    ## 
    ## Acceptable values:
    ##   - on or off
    leveldb.compression = on
    
    ## Selection of compression algorithms.  snappy is the
    ## original compression supplied with leveldb.  lz4 is a newer
    ## algorithm that compresses to a similar volume but averages twice
    ## as fast on writes and four times as fast on reads.
    ## 
    ## Acceptable values:
    ##   - one of: snappy, lz4
    leveldb.compression.algorithm = lz4