I found a bug in ScyllaDB Enterprise auditing where queries such as CREATE TABLE and DROP TABLE sometimes generate duplicate audit log entries.

Executed Queries:

CREATE TABLE loads(ID int PRIMARY KEY);
DROP TABLE loads;

Audit Logs:

2024-01-30T11:28:57.577649+00:00 ip-172-31-63-35.ec2.internal scylla-audit: "172.31.63.35:0", "DDL", "ONE", "loads", "mykeyspace", "CREATE TABLE loads(ID int PRIMARY KEY);", "127.0.0.1:0", "cassandra", "false"
2024-01-30T11:28:57.577951+00:00 ip-172-31-63-35.ec2.internal scylla-audit: "172.31.63.35:0", "DDL", "ONE", "loads", "mykeyspace", "CREATE TABLE loads(ID int PRIMARY KEY);", "127.0.0.1:0", "cassandra", "false"
2024-01-30T11:29:04.891524+00:00 ip-172-31-63-35.ec2.internal scylla-audit: "172.31.63.35:0", "DDL", "ONE", "loads", "mykeyspace", "DROP TABLE loads;", "127.0.0.1:0", "cassandra", "false"
2024-01-30T11:29:04.891871+00:00 ip-172-31-63-35.ec2.internal scylla-audit: "172.31.63.35:0", "DDL", "ONE", "loads", "mykeyspace", "DROP TABLE loads;", "127.0.0.1:0", "cassandra", "false"

As you can see in the audit logs above, the only difference is the sub-second part of the timestamp. If these entries are duplicates, why do the timestamps differ?
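One way to confirm that the entries really are duplicates (and not distinct events that merely look alike) is to strip the leading timestamp and count repeated records. This is a minimal sketch; the log format follows the lines quoted above, and the file path is whatever your rsyslog template produces:

```python
import re
from collections import Counter

# Strip the leading RFC 3339 timestamp (everything up to the first space)
# so otherwise-identical audit records collapse to the same key.
TIMESTAMP = re.compile(r"^\S+\s+")

def duplicated_records(lines):
    """Return audit records appearing more than once, ignoring timestamps."""
    counts = Counter(TIMESTAMP.sub("", ln.strip()) for ln in lines if ln.strip())
    return {record: n for record, n in counts.items() if n > 1}

# Usage sketch (adjust the path to your rsyslog output location):
#   with open("/var/log/<hostname>/scylla-audit.log") as f:
#       for record, n in duplicated_records(f).items():
#           print(f"{n}x {record}")
```

Applied to the four lines above, this reports the CREATE TABLE record and the DROP TABLE record twice each.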

Hello, please make sure you are looking at the correct timestamp; it should be the event_time column in the audit table. What you posted doesn’t look like a direct SELECT result. What does your SELECT query look like?

I did not mention a SELECT query here, only CREATE TABLE and DROP TABLE.

Actually, I am writing the audit log to syslog, not to the audit table, and this is the format it has in syslog.

Below are my audit settings:

# audit setting
# by default, Scylla does not audit anything.
# It is possible to enable auditing to the following places:
#   - audit.audit_log column family by setting the flag to "table"
audit: "syslog"
#
# List of statement categories that should be audited.
audit_categories: "DCL,DDL,AUTH,DML,ADMIN,QUERY"
#
# List of tables that should be audited.
audit_tables: ""
#
# List of keyspaces that should be fully audited.
# All tables in those keyspaces will be audited
audit_keyspaces: "mykespace"

And I changed the time format template in the rsyslog.conf file to get a more precise timestamp.

OK, I see. Are you sure you are not calling CREATE and DROP in a loop? Unfortunately, this timestamp is not taken directly from the query but generated at the time the audit log entry is written. This can make the ordering slightly different, so in theory the log could reflect the following sequence:

CREATE TABLE loads …
DROP TABLE loads…

CREATE TABLE loads …
DROP TABLE loads…

I am executing these queries one by one inside the CQL shell, and then duplicate entries appear in the scylla-audit.log file, as I showed above.

I only execute these queries once.

CREATE TABLE loads(ID int PRIMARY KEY);
DROP TABLE loads;

I cannot reproduce it locally with the latest enterprise release. Which release do you use?
Also, can you please send us your rsyslog.conf file? I was using the default one.
Also, where is this scylla-audit.log file located, and is it the file you are reading? I was just looking at the logs with “journalctl”.
I suspect there may be some configuration issue, not necessarily on the Scylla side.
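One hedged way to separate the two suspects is to read the audit entries straight from journald, bypassing rsyslog entirely: if the duplicates already appear there, the rsyslog configuration is ruled out as the source. This sketch assumes the syslog identifier "scylla-audit" seen in the log lines above:

```python
import subprocess
from collections import Counter

def duplicate_messages(text):
    """Return journal messages that repeat verbatim, with their counts."""
    counts = Counter(line for line in text.splitlines() if line.strip())
    return {msg: n for msg, n in counts.items() if n > 1}

def journal_duplicates(identifier="scylla-audit"):
    """Fetch audit messages directly from journald and count repeats.

    -t filters by syslog identifier; -o cat prints only the message body,
    so identical audit records compare equal regardless of journal metadata.
    """
    out = subprocess.run(
        ["journalctl", "-t", identifier, "-o", "cat", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout
    return duplicate_messages(out)
```

If `journal_duplicates()` returns the same repeated records as the scylla-audit.log file, the duplication happens before rsyslog ever sees the messages.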

I am using the latest ScyllaDB Enterprise version on an AWS EC2 instance running Ubuntu 22.04.

# /etc/rsyslog.conf configuration file for rsyslog
#
# For more information install rsyslog-doc and see
# /usr/share/doc/rsyslog-doc/html/configuration/index.html
#
# Default logging rules can be found in /etc/rsyslog.d/50-default.conf


#################
#### MODULES ####
#################

module(load="imuxsock") # provides support for local system logging
#module(load="immark")  # provides --MARK-- message capability

# provides UDP syslog reception
#module(load="imudp")
#input(type="imudp" port="514")

# provides TCP syslog reception
#module(load="imtcp")
#input(type="imtcp" port="514")

# provides kernel logging support and enable non-kernel klog messages
module(load="imklog" permitnonkernelfacility="on")

###########################
#### GLOBAL DIRECTIVES ####
###########################

#
# Use traditional timestamp format.
# To enable high precision timestamps, comment out the following line.
#
#$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$template RFC3339Format,"%timestamp:::date-rfc3339% %hostname% %syslogtag%%msg%\n"
$ActionFileDefaultTemplate RFC3339Format

# Filter duplicated messages
$RepeatedMsgReduction off

#
# Set the default permissions for all log files.
#
$FileOwner syslog
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022
$PrivDropToUser syslog
$PrivDropToGroup syslog

#
# Where to place spool and state files
#
$WorkDirectory /var/spool/rsyslog

#
# Include all config files in /etc/rsyslog.d/
#
$IncludeConfig /etc/rsyslog.d/*.conf
$template remote-incoming-logs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?remote-incoming-logs
& ~

The template below is used for precise timestamps in the audit logs.

$template RFC3339Format,"%timestamp:::date-rfc3339% %hostname% %syslogtag%%msg%\n"
$ActionFileDefaultTemplate RFC3339Format

The three-line template below makes rsyslog write the scylla-audit.log file to the /var/log/<hostname>/ directory.

$template remote-incoming-logs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?remote-incoming-logs
& ~

I am facing this issue even for a successful login using cqlsh -u cassandra -p cassandra. In that case, too, duplicate audit log entries are generated on a successful login in the CQL shell.

Thanks. I can confirm that the issue reproduces on our end (though a bit differently). I’ll open a GitHub issue to have it fixed and will link it here.


Yeah sure thanks.

Could you please also look into this other issue:

[User-defined queries are not working in the ScyllaDB Enterprise CQL shell and throw the error I mentioned in the description]

Here’s the Github Issue for the bug discussed in this conversation - https://github.com/scylladb/scylla-enterprise/issues/3861
For the other bug I filed a separate bug report and linked it in the forum thread you cited.

I am not able to see anything at the GitHub issue link you provided.
It shows “Page not found”.

If you wish to track the progress of this issue, please contact our enterprise customer support.
