deajan opened this issue 2 weeks ago
Getting the user list should not include mailing lists, IMO.
I know exactly how you feel :rofl: . That mlist was historically seen a bit differently, but I can only concur that it could be divided into something else.
And the `--filter=` feature isn't that good at filtering, or I just couldn't figure out how to use it. I ended up just using bonkers jq with the JSON output, e.g.:
```bash
# grom_doms is a function
grom_doms ()
{
    grommunio-admin domain query domainname # not needed anymore: | sed '/domainname/d'
}

# grom_users is a function
grom_users ()
{
    local doms;
    if [[ $# -ge 1 ]]; then
        doms=$(grep "^${1}" <<<"$(grom_doms)");
    else
        doms=$(grom_doms);
    fi;
    for dom in $doms;
    do
        grommunio-admin user query username status maildir --format json-structured | jq -r '.[]|select((.username|endswith("'"${dom}"'")) and (.maildir|.!="") and ((.status==0) or (.status==4)))|.username';
    done
}
```
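For reference, calling these helpers would look something like this (the optional argument is just a prefix that gets grepped against the domain list; `example.com` is a placeholder):

```bash
grom_users                # users with status 0 or 4 and a non-empty maildir, across all domains
grom_users example.com    # the same, limited to domains whose name starts with "example.com"
```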
> `grommunio-admin user query` returns users, but also groups, with no way to differentiate users from groups.
It will also give you contacts, which at least only show up as 'contact-NNN' (when not of your domain) and with status 5/contact.
You can differentiate when you query `ID status maildir username` (not really, but it could help..).
I wrote this thingy once but never extended it to orgs and contacts, but it might help ^_^
```bash
#!/bin/bash
set -e
set -x

# retrieve mailbox folders, usually /var/lib/gromox/{user,domain}
domainPrefix=$(grommunio-admin config get options.domainPrefix)
userPrefix=$(grommunio-admin config get options.userPrefix)

# retrieve all configured domains
domaindata="$(grommunio-admin domain query --format=csv --separator=';' homedir ID domainname | grep "^${domainPrefix}")"
userdata="$(grommunio-admin user query --format=csv --separator=';' maildir ID username domainID | grep "^${userPrefix}")"

#
# grommunio-admin domain query --format json-structured ID orgID domainname activeUsers address adminName chat chatID displayname domainStatus endDay homedir homeserverID inactiveUsers maxUser tel title virtualUsers
# grommunio-admin org query --format json-structured ID name domainCount description
# grommunio-admin user query --format json-structured ID aliases changePassword chat chatAdmin domainID forward homeserverID lang ldapID maildir pop3_imap privArchive privChat privFiles privVideo publicAddress smtp status username
#

for domain in ${domaindata[@]}; do
    domainID="$(awk -F';' '{print $2}' <<< "$domain")"
    domainname="$(awk -F';' '{print $3}' <<< "$domain")"
    homedir="$(awk -F';' '{print $1}' <<< "$domain")"
    users="$(awk -F';' '$4=='$domainID'' <<< "$userdata")"
    for user in $users; do
        read maildir username <<< $(awk -F';' '{print $1, $3}' <<< "$user")
    done
done
```
@crpb Thank you for your reply.
I too tinkered around with the CLI output. I also noticed that commands like `grommunio-admin exmdb <user> store get` don't have JSON output, so they're not that easy to work with, and they are painfully slow.
I'm currently making a Prometheus exporter for Grommunio; I'll post about it on the forum shortly. Perhaps you want to have a quick look: https://github.com/netinvent/grommunio_exporter/tree/v0.2.1 I'm open to any suggestions.
@deajan Depending on whether the data is only accessible to people who would be allowed to see such data, I would suggest adding folderstatistics. The data is all in the mail stores' exmdb/exchange.sqlite3. I just tinkered with that these days. Still working on it, but here is a little teaser
Groups can be excluded from the results by using the `--filter mlist=` option (yes, with an empty value).
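For example, combined with the field selection used earlier in this thread (the exact combination is just illustrative):

```bash
# The empty mlist filter drops groups/mailing lists from the result
grommunio-admin user query username status maildir --filter mlist=
```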
Sorry for the missing documentation, I will add that to the man page.
@juliaschroeder Works for me, thank you.
As a side question: I'm currently developing a Prometheus exporter for Grommunio, and I noticed that getting the mailbox sizes and quotas via `grommunio-admin exmdb <user> store get` is painfully resource hungry and doesn't output anything easily parsable.
Is there any alternative I could use to gather this information? (Sorry to piggyback on this thread.)
> and doesn't output anything easily parsable.
this reminds me of this :P

> trick 17 for silly `["settings":{"zarafa"....` explosions:
> `COLUMNS=80 grommunio-admin exmdb user@dom.tld store get`
>
> Originally posted by @crpb in 9281282

and you can just do `grommunio-admin exmdb user store get PROPVAL` for the single return.
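e.g., with one of the properties that comes up later in this thread (the address is a placeholder):

```bash
grommunio-admin exmdb user@dom.tld store get normalmessagesizeextended
```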
Initializing the Python environment, especially the SQLAlchemy runtime, takes a lot of time. If possible, you can pre-generate commands and pipe them into `grommunio-admin shell`, e.g.

```bash
grommunio-admin shell -x <<EOF
exmdb user1@example.com store get
exmdb user2@example.com store get
EOF
```

However, that will make the output even more difficult to work with…
Another factor is the mailbox access that occurs for every user, which can take some time if the mailbox is not already loaded and has to be opened (by the `gromox-http` process). Unfortunately, there is nothing that can be done about that overhead.
I will add options to produce output in JSON and CSV format in the next few days to at least make the parsing easier.
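A sketch of how the pre-generation could be scripted across all mailboxes, reusing the query, filter, and properties that appear elsewhere in this thread (the exact combination is an assumption, not a documented recipe):

```bash
# Build one "exmdb <user> store get ..." command per mailbox
# and run them all in a single grommunio-admin shell session.
grommunio-admin user query username --format json-structured --filter mlist= |
  jq -r '.[].username' |
  sed 's/^/exmdb /; s/$/ store get prohibitsendquota normalmessagesizeextended/' |
  grommunio-admin shell -x
```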
> ```bash
> grommunio-admin shell -x <<EOF
> exmdb user1@example.com store get
> exmdb user2@example.com store get
> EOF
> ```
>
> However, that will make the output even more difficult to work with…

ask the people who only do those shenanigans 📦 this was just the first try.

```bash
echo "exmdb cb@dadada.de store get normalmessagesizeextended" | grommunio-admin shell 2>/dev/null
```
@crpb @juliaschroeder Thank you.
Currently, I'm parsing the exmdb output via awk to produce JSON:

```bash
grommunio-admin exmdb user1@example.com store get | awk 'BEGIN { printf "[" } { if ($1 ~ /^0x/) { next }; printf "\n%s{\"%s\": \"%s\"}", sep, $1, $2; sep="," } END { printf "]\n" }'
```

This produces JSON output, but of course it is really not a foolproof solution.
Indeed, having JSON output would be much easier ^^
@crpb I'm not feeling really comfortable with directly tinkering with the SQLite databases, since everything could break on internal data schema changes. The solution of using one `grommunio-admin shell` session to gather all the info might be the best one.
I've made some tests: getting a single value via `store get normalmessagesizeextended` and getting all values both take about 0.8s on my test system.
Getting 4 users in a single session takes 0.9s, so that's definitely the way to go.
I'll just have to change the awk processing to something using regular expressions.
I just got this one:

```bash
{ grommunio-admin shell -x <<< "exmdb cb@moode store get normalmessagesizeextended" 2>/dev/null; } | sed '1d;/^$/d'
```

times:

```
real    0m0.869s
user    0m0.783s
sys     0m0.080s
```

EDIT: as said before, just query what you want and not the thing w/o a propval:

```
gromi:~ # time { grommunio-admin shell -x <<< "exmdb cb@as.de store get prohibitreceivequota prohibitsendquota assocmessagesizeextended normalmessagesizeextended" 2>/dev/null; } | sed '1d;/^$/d'
prohibitreceivequota       921600     900 MiB
prohibitsendquota          768000     750 MiB
assocmessagesizeextended   758236     740 kiB
normalmessagesizeextended  456801370  436 MiB

real    0m0.863s
user    0m0.776s
sys     0m0.087s
```
> @crpb I'm not feeling really comfortable with directly tinkering with the SQLite databases, since everything could break on internal data schema changes. The solution of using one `grommunio-admin shell` session to gather all the info might be the best one.

You have seen that it's all temporary tables and `-readonly` calls?
> You have seen that it's all temporary tables and `-readonly` calls?
Perhaps. But having an API that does the job is enough for me, and it doesn't require digging up the SQLite file paths ^^
> Depending on whether the data is only accessible to people who would be allowed to see such data, I would suggest adding folderstatistics. The data is all in the mail stores' exmdb/exchange.sqlite3. I just tinkered with that these days. Still working on it, but here is a little teaser
As a sysadmin, I cannot imagine what folderstatistics would be for in a monitoring tool. What's the goal? Mine is to know whether a mailbox is reaching its quota, and to act before it's too late.
> As a sysadmin, I cannot imagine what folderstatistics would be for in a monitoring tool. What's the goal? Mine is to know whether a mailbox is reaching its quota, and to act before it's too late.
It was a thought. I also know people who keep their mailboxes very clean, and the next person maybe wants to know if anything is stuck in a process which would usually clear out a folder but didn't, and that thing just doesn't warn about it or just didn't know..
I get the idea... But folderstatistics would add a lot of data (one entry per folder name, with every user having its own idea of folder names, I guess).
> Initializing the Python environment, especially the SQLAlchemy runtime, takes a lot of time. If possible, you can pre-generate commands and pipe them into `grommunio-admin shell`, e.g.
>
> ```bash
> grommunio-admin shell -x <<EOF
> exmdb user1@example.com store get
> exmdb user2@example.com store get
> EOF
> ```
I've just coded something around the shell, and it's pretty fast, but the output is not consistent: I can't parse it via regex because I sometimes get newlines in the middle of the data. I've managed to parse the output with some little awk magic. Looking forward to JSON support.
For now, my requests take a couple of seconds regardless of the number of users; although I didn't test with "big" Grommunio setups, it should scale well. Again, thanks.
Hi, I added the format argument to `exmdb … store get` (23c560ab91cc9afc337a7b3cc4fc921a0554a831).
I also added two new formats, `json-kv` and `json-object` (9d08ffec03c3abcf7d535908f7d967d9dc9c5f05), because I felt like especially the former would make a lot of sense for the described use case.
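Assuming it follows the `--format` convention of the other subcommands shown in this thread (the exact flag placement is an assumption, not taken from the commits), usage would presumably look like:

```bash
# json-object is the other newly added format
grommunio-admin exmdb user1@example.com store get --format json-kv
```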
@deajan fwiw, that might be faster :p

```
gromi:/var/lib/gromox/user/5/2 # grommunio-admin taginfo prohibitreceivequota prohibitsendquota assocmessagesizeextended normalmessagesizeextended
0x666a0003 (1718222851): PROHIBITRECEIVEQUOTA, type LONG
0x666e0003 (1718484995): PROHIBITSENDQUOTA, type LONG
0x66b40014 (1723072532): ASSOCMESSAGESIZEEXTENDED, type LONGLONG
0x66b30014 (1723006996): NORMALMESSAGESIZEEXTENDED, type LONGLONG

gromi:/var/lib/gromox/user/5/2 # time sqlite3 -readonly /var/lib/gromox/user/5/2/exmdb/exchange.sqlite3 'SELECT proptag,propval FROM store_properties WHERE proptag IN (1718222851,1718484995,1723072532,1723006996);'
1718222851|921600
1718484995|768000
1723006996|459164156
1723072532|758312

real    0m0.005s
user    0m0.000s
sys     0m0.005s
```
@crpb Thanks, but I think using your solution would require querying the maildir location on a per-user basis using grommunio-admin, which would make it slow again, as suggested by @juliaschroeder, since it would again initialize the Python + SQLAlchemy environment per user.
I guess the current implementation is already quite fast (I don't have a "big" Grommunio server to test with, though): 0.9 seconds for 4 users instead of 0.8s is pretty decent already, since it works out to roughly 0.025s per user in a synthetic benchmark.
I think that once I get to parse the new JSON output, it will be even faster, since I won't have to rely on a terrible awk hack.
Still, I really appreciate your feedback, thank you.
> but I think using your solution would require querying the maildir location on a per-user basis using grommunio-admin,

Just FYI, if it turns out to be slow somewhere, you could do a `mysql grommunio --execute 'SELECT username, maildir FROM users;'` (without thinking about any extras here like username/password, of course), IIRC.
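Combining that with the direct SQLite query shown above, a minimal sketch could look like this (assuming MySQL socket authentication and the default store layout; proptag 1723006996 is NORMALMESSAGESIZEEXTENDED per the taginfo output earlier):

```bash
# Print each mailbox with its normalmessagesizeextended value, read directly from the store
mysql grommunio -N --execute 'SELECT username, maildir FROM users;' |
while read -r username maildir; do
    size=$(sqlite3 -readonly "${maildir}/exmdb/exchange.sqlite3" \
        'SELECT propval FROM store_properties WHERE proptag=1723006996;')
    echo "${username} ${size}"
done
```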
@juliaschroeder LGTM, the new `json-kv` format seems pretty nice to parse.
I still have to do some awk magic to transform the per-user JSON blocks into a list when using the shell EOF method.
Would it be complicated to add a `grommunio-admin exmdb --all-users store get` command that would directly return a list of JSON objects? If not, the awk trick will do ;)
While developing around `grommunio-admin`, I noticed that `grommunio-admin user query` returns users, but also groups, with no way to differentiate users from groups. Pretty sure that should be a job for `grommunio-admin mlist list`; getting the user list should not include mailing lists IMO.