The encoding, database and the dog

Zabbix provides a network map feature, where images may represent triggers, hosts, host groups etc. They may show current status, problem count and lots of other data. There are some default images that may be used in the maps, and users may also upload their own. This uploading has been a problem in the past.

These images are stored in the database. The problem is that Zabbix supports 5 different databases, and they store binary data in different formats. As a result, users ran into various problems whenever the database encoding was incorrect. Of course, maintaining data in multiple different formats is not easy either, and updating the icon set is harder than it should be.

Re-base the images

Thus the idea is to unify image storage and keep the images in base64 for all databases. While that would increase the amount of space the images take, it would solve lots of problems and make life easier for Zabbix users. The problem is upgrading.
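The unified-storage idea boils down to one conversion: binary image bytes become base64 text before hitting the database, so every supported database stores the same portable ASCII representation. A minimal sketch (Python here purely for illustration; the actual Zabbix frontend is PHP, and the function names are hypothetical):

```python
import base64

def encode_image(raw: bytes) -> str:
    """Encode raw image bytes as base64 text, safe for any TEXT column."""
    return base64.b64encode(raw).decode("ascii")

def decode_image(stored: str) -> bytes:
    """Decode the base64 text back to the original image bytes."""
    return base64.b64decode(stored)

# A tiny stand-in "image": the PNG magic bytes.
raw = b"\x89PNG\r\n\x1a\n"
stored = encode_image(raw)

assert stored == "iVBORw0KGgo="     # plain ASCII, identical in every database
assert decode_image(stored) == raw  # lossless round trip
```

The space cost mentioned above comes from base64 producing 4 output bytes for every 3 input bytes, i.e. roughly a third more storage.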

Users would probably want to keep their existing images, so the upgrade should convert them to base64 (keeping support for the old format would be duplicate work, and there would be no way to phase that support out later anyway). Doing this for all supported databases does not seem to be possible at this time with the upgrade process Zabbix currently uses – Zabbix database patches for going to the next major version are written in SQL. Involving a separate process outside the database is not being considered at this time, and storing images in the filesystem wouldn’t work because of the distributed monitoring.

So the question to the Zabbix community would be – how would you handle this problem?

And, by the way, we are looking for great PHP programmers…

Ricardo Santos
12 years ago

* Suggestion 1 – Create a button to “convert” old-style to new-style in frontend

This creates a new problem: Zabbix wouldn’t know whether images have been converted or not. To solve this, a “flag” should be added to the “image data”. It’s very similar to suggestion 2.

* Suggestion 2 – Image Header

Create a new-style “image data” with an “image header”, something like “base64:AbCdE==”
That way we could also support other image types, like “external images”, e.g. “url:
The old style could be maintained or not

The “image header” idea is based on this concept:
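The header scheme from suggestion 2 could be sketched like this (Python for illustration only; the function name and the set of recognized headers are assumptions): each stored value carries a scheme prefix, anything without a known prefix is treated as old-style raw data, which is what would let the old format be maintained or phased out later.

```python
def parse_image_data(stored: str):
    """Split a prefixed image value into (header, payload).

    Values without a recognized header are treated as old-style
    raw data, so the old format can coexist with the new one.
    """
    for header in ("base64", "url"):
        prefix = header + ":"
        if stored.startswith(prefix):
            return header, stored[len(prefix):]
    return "raw", stored

assert parse_image_data("base64:AbCdE==") == ("base64", "AbCdE==")
assert parse_image_data("url:http://example.com/icon.png") == \
    ("url", "http://example.com/icon.png")
assert parse_image_data("\x89PNG...") == ("raw", "\x89PNG...")
```

Dispatching on the header also gives Zabbix the “converted or not” flag from suggestion 1 for free: unconverted rows simply lack a prefix.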

Robert Markula
12 years ago

Um, sorry for the dumb question, but do images have to reside in the database at all?

What about a dedicated image directory somewhere in the zabbix web root? All images in this folder would automatically be visible on the web frontend. No fuss with different databases, image encodings and stuff. Once implemented it would be much more intuitive to use, both for the zabbix devs and the zabbix users.

Regarding speed regressions using an image directory, some sort of caching could be introduced.

Ricardo Santos
12 years ago
Reply to Richlv

If we allowed the Zabbix frontend to create/modify files on the webserver, it would become a security breach.

12 years ago

First let me say, I’m not a fan of Base64-ing images. I’ve recently done a project which had something like this in place and it was messy. Also, it will greatly increase the size of images in your database, and the time it takes to consume them (decoding them). If the front-ends are going to be remote from your database engine, this will increase the amount of time necessary for retrieving those images.

Let’s simplify… your problem is encoding binary image data in a database, so let’s remove binary image data from the database. You mentioned that you have explored this idea, but I don’t know if you’ve explored all avenues of it. With the problem proposed above, here’s what I would do…

First, move all built-in Zabbix images into the front-end codebase, and store any user-provided images in an “engine” (abstract on purpose) that all front-ends can access. Because the simple problem we’re solving here is image distribution, so let’s distribute it!

Some of the “engines” (see above) that can store these images are a local folder (perfect for single-front-end installs), S3, other CDNs, iSCSI/NFS (via a local file), other network-based storage engines via a local file (via FUSE), a network-based persistent key-value database engine (membase, Google BigData, Cassandra, Amazon SimpleDB) or even something simple like FTP. Then every front-end can have access to these files regardless of their location/connection/etc.

Then the migration process would ask the user what method of storage they want to use, plus any options necessary for that choice. Then iterate through your image table, migrate all images to that storage engine, and create a new image_paths table which just stores a reference to each image instead of binary data. The paths would look something like…

cassandra://mycassandraserver:8080/image1.jpg
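A reference like that could be resolved with ordinary URI parsing, with the scheme selecting the storage engine. A sketch (Python for illustration; the engine list and function name are hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical registry of URI schemes mapped to storage back-ends.
ENGINES = {"file", "s3", "ftp", "cassandra"}

def resolve_image_path(uri: str):
    """Parse a stored image reference into (engine, host, port, path)."""
    parts = urlparse(uri)
    if parts.scheme not in ENGINES:
        raise ValueError(f"unknown storage engine: {parts.scheme!r}")
    return parts.scheme, parts.hostname, parts.port, parts.path

engine, host, port, path = resolve_image_path(
    "cassandra://mycassandraserver:8080/image1.jpg")
assert (engine, host, port, path) == \
    ("cassandra", "mycassandraserver", 8080, "/image1.jpg")
```

The image_paths table then only ever holds short text references, sidestepping the binary-encoding differences between databases entirely.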

I’m guessing (am I wrong?) that a _large_ number of your users are using only a single front-end. I personally have about 5 single-front-end setups for my clients and employers; one is what I would consider large (50+ servers). For those customers, this process would be especially simple and painless: the migration assistant can auto-suggest a folder in the existing web directory, so the user just hits “next” and they are done. And the more enterprise-like setups with numerous front-ends and proxies would have the technical ability to do some of the more difficult migrations to other engines.

Just my 2c. 😉 My ideas above sound like fun, wish I wasn’t employed and had free time to help out! (Though I wouldn’t want to help out if you go the base64 route, lol, been there, done that, not a fan).


12 years ago
Reply to  Farley

And I just thought of something… in a multi-front-end setup, you could also designate one of the front-ends the “image master”, and only on that front-end can you upload/update images. The other front-ends get configured with the URL of your “image master” server, so they can retrieve images from it, but only the “image master” can write images to itself locally. And if you wanted to iterate on that to improve it, write an upload-pass-through engine that accepts image uploads from other front-ends. Your “image slaves” then pass image uploads straight through to the “image master” and respond to the user with whatever the image master responds with. I mean, you have this fancy new API in Zabbix that I think could easily facilitate this front-end-to-front-end traffic now! 😉
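The master/slave pass-through above could be sketched roughly like this (Python, every name hypothetical; a real slave would forward the upload over HTTP to the master’s API rather than calling it in-process):

```python
class ImageFrontend:
    """Sketch of a frontend acting as image master or image slave.

    The master writes uploads into its own store; a slave passes the
    upload through to the master and returns the master's reply.
    """

    def __init__(self, name, master=None):
        self.name = name
        self.master = master   # None means this frontend IS the master
        self.store = {}        # stands in for local image storage

    def upload(self, filename, data):
        if self.master is None:
            self.store[filename] = data
            return f"stored on {self.name}"
        # Pass-through: a real slave would POST to the master's API here.
        return self.master.upload(filename, data)

master = ImageFrontend("master")
slave = ImageFrontend("slave1", master=master)

reply = slave.upload("icon.png", b"\x89PNG")
assert reply == "stored on master"   # slave relays the master's response
assert "icon.png" in master.store    # the image lands only on the master
assert slave.store == {}
```

The design choice here is that the user never needs to know which frontend they are talking to, exactly as described above.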

Robert Markula
12 years ago
Reply to  Farley

I thought about that “image master” thing as well. The proposed “image engine” could also distribute new images to all frontends as soon as new images (or changes) are detected. So there is only one image repository – on a “master” frontend – and all frontends share a local cache that keeps working even if the master frontend is down.

Example workflow:

1. The user uploads/deletes/modifies an image on the image master
2. The image engine periodically checks the image directory for changes
3. As soon as a change is detected, the image engine distributes the new/changed images to all frontends (the master frontend would have to know about the other frontends)
4. The images are present locally as identical copies on all frontends.
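The change-detection step of that workflow could be sketched with content hashes (Python, standard library only; the function names and directory layout are assumptions):

```python
import hashlib
import os

def snapshot(directory):
    """Map each file under `directory` to a hash of its content."""
    digest = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as fh:
                rel = os.path.relpath(path, directory)
                digest[rel] = hashlib.sha256(fh.read()).hexdigest()
    return digest

def diff(old, new):
    """Return (added_or_changed, deleted) files between two snapshots."""
    changed = {f for f, h in new.items() if old.get(f) != h}
    deleted = set(old) - set(new)
    return changed, deleted

# The image engine would call snapshot() periodically on the master's
# image directory and push every changed file to each known frontend.
```

Comparing hashes rather than timestamps makes the sync idempotent: re-running it after a successful distribution finds nothing to push.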

The nice thing about that approach would be that everything is done transparently to the user – all he has to do is upload an image to the master frontend, and everything else happens automatically. No worries about ssh, NFS or anything else.
