The Amazon conference “re:Invent” is taking place in Las Vegas at the moment. For a while now, I’ve been using the Amazon cloud services (EC2), mainly to run lab and research systems. Amongst the multiple announcements already made during the conference, one caught my attention: “CloudTrail“. Everything has already been said about the pros & cons of cloud computing, but one issue is particularly frustrating if, like me, you like to know what’s happening and to keep an eye on your infrastructure (mainly from a security point of view): who’s doing what, when and from where with your cloud resources? CloudTrail can help you increase your visibility and is described by Amazon as follows:
CloudTrail provides increased visibility into AWS user activity that occurs within an AWS account and allows you to track changes that were made to AWS resources. CloudTrail makes it easier for customers to demonstrate compliance with internal policies or regulatory standards.
As explained in the Amazon blog post, once enabled, CloudTrail generates files containing events in a specific S3 bucket (which you configure during the setup). Those files are available like any other data. What about grabbing the files at regular intervals and creating a local logfile that can be processed by a third-party tool like… OSSEC?
Generated events are stored as JSON data in gzipped files. I wrote a small Python script which downloads these files and generates a flat file:
$ ./getawslog.py -h
Usage: getawslog.py [options]

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -b LOGBUCKET, --bucket=LOGBUCKET
                        Specify the S3 bucket containing AWS logs
  -d, --debug           Increase verbosity
  -l LOGFILE, --log=LOGFILE
                        Local log file
  -j, --json            Reformat JSON message (default: raw)
  -D, --delete          Delete processed files from the AWS S3 bucket

$ ./getawslog.py -b xxxxxx -l foo.log -d -j -D
+++ Debug mode on
+++ Connecting to Amazon S3
+++ Found new log: xxxxxxxxxxxx_CloudTrail_us-east-1_20131114T1325Z_xxx.json.gz
+++ Found new log: xxxxxxxxxxxx_CloudTrail_us-east-1_20131114T1330Z_xxx.json.gz
+++ Found new log: xxxxxxxxxxxx_CloudTrail_us-east-1_20131114T1335Z_xxx.json.gz
+++ Found new log: xxxxxxxxxxxx_CloudTrail_us-east-1_20131115T0745Z_xxx.json.gz
+++ Found new log: xxxxxxxxxxxx_CloudTrail_us-east-1_20131115T0745Z_xxx.json.gz
+++ Found new log: xxxxxxxxxxxx_CloudTrail_us-east-1_20131115T0750Z_xxx.json.gz
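For the curious, the core of the script boils down to a loop over the bucket keys. Here is a minimal sketch using the boto library (bucket and file names are illustrative, error handling omitted):

#!/usr/bin/env python
# Minimal sketch of the download loop (boto, Python 2); names are illustrative.
import gzip
import json
from StringIO import StringIO

import boto

def dump_bucket(bucket_name, logfile, delete=False):
    conn = boto.connect_s3()              # credentials come from the boto config or environment
    bucket = conn.get_bucket(bucket_name)
    with open(logfile, "a") as out:
        for key in bucket.list():
            # CloudTrail files are gzipped JSON documents with a "Records" list
            blob = gzip.GzipFile(fileobj=StringIO(key.get_contents_as_string())).read()
            for record in json.loads(blob).get("Records", []):
                out.write(json.dumps(record) + "\n")
            if delete:
                key.delete()              # "-D" behaviour: drop the processed file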
By default, the script simply appends the JSON data to the specified file. If you use the “-j” switch, it parses the received events and stores them in a format much more convenient for further processing by OSSEC (using “item”:“value” pairs). Here is an example of a parsed event:
"eventVersion":"1.0","eventTime":"2013-11-15T07:55:53Z", "requestParameters":"{u'instancesSet': {u'items': [{u'instanceId': u'i-415f473b'}]}}","responseElements":"{u'instancesSet': {u'items': [{u'instanceId': u'i-415f473b', u'currentState': {u'code': 32, u'name': u'shutting-down'}, u'previousState': {u'code': 16, u'name': u'running'}}]}}","awsRegion":"us-east-1","eventName":"TerminateInstances","userIdentity":"{u'principalId': u'xxxxxxxxxxxx', u'accessKeyId': u'xxxxxxxxxxxxxxxxxxxx', u'sessionContext': {u'attributes': {u'creationDate': u'2013-11-15T07:48:03Z', u'mfaAuthenticated': u'false'}}, u'type': u'Root', u'arn': u'arn:aws:iam::xxxxxxxxxxxx:root', u'accountId': u'xxxxxxxxxxxx'}","eventSource":"ec2.amazonaws.com","userAgent":"EC2ConsoleBackend","sourceIPAddress":"xxx.xxx.xxx.xxx"
Within OSSEC, create a new decoder which extracts the information you find relevant. Here is mine:
<decoder name="cloudtrail">
  <prematch>^"eventVersion":"\d.\d"</prematch>
  <regex>"awsRegion":"(\S+)"\.+"eventName":"(\S+)"\.+"sourceIPAddress":"(\d+.\d+.\d+.\d+)"$</regex>
  <order>data,action,srcip</order>
</decoder>
And here is the event above decoded by OSSEC:
**Phase 1: Completed pre-decoding.
       full event: '"eventVersion":"1.0","eventTime":"2013-11-15T07:55:53Z","requestParameters":"{u'instancesSet': {u'items': [{u'instanceId': u'i-415f473b'}]}}","responseElements":"{u'instancesSet': {u'items': [{u'instanceId': u'i-415f473b', u'currentState': {u'code': 32, u'name': u'shutting-down'}, u'previousState': {u'code': 16, u'name': u'running'}}]}}","awsRegion":"us-east-1","eventName":"TerminateInstances","userIdentity":"{u'principalId': u'xxxxxxxxxxxx', u'accessKeyId': u'xxxxxxxxxxxxxxxxxxxx', u'sessionContext': {u'attributes': {u'creationDate': u'2013-11-15T07:48:03Z', u'mfaAuthenticated': u'false'}}, u'type': u'Root', u'arn': u'arn:aws:iam::xxxxxxxxxxxx:root', u'accountId': u'xxxxxxxxxxxx'}","eventSource":"ec2.amazonaws.com","userAgent":"EC2ConsoleBackend","sourceIPAddress":"xxx.xxx.xxx.xxx"'
       hostname: 'boogey'
       program_name: '(null)'
       log: '"eventVersion":"1.0","eventTime":"2013-11-15T07:55:53Z","requestParameters":"{u'instancesSet': {u'items': [{u'instanceId': u'i-415f473b'}]}}","responseElements":"{u'instancesSet': {u'items': [{u'instanceId': u'i-415f473b', u'currentState': {u'code': 32, u'name': u'shutting-down'}, u'previousState': {u'code': 16, u'name': u'running'}}]}}","awsRegion":"us-east-1","eventName":"TerminateInstances","userIdentity":"{u'principalId': u'xxxxxxxxxxxx', u'accessKeyId': u'xxxxxxxxxxxxxxxxxxxx', u'sessionContext': {u'attributes': {u'creationDate': u'2013-11-15T07:48:03Z', u'mfaAuthenticated': u'false'}}, u'type': u'Root', u'arn': u'arn:aws:iam::xxxxxxxxxxxx:root', u'accountId': u'xxxxxxxxxxxx'}","eventSource":"ec2.amazonaws.com","userAgent":"EC2ConsoleBackend","sourceIPAddress":"xxx.xxx.xxx.xxx"'

**Phase 2: Completed decoding.
       decoder: 'cloudtrail'
       extra_data: 'us-east-1'
       action: 'TerminateInstances'
       srcip: 'xxx.xxx.xxx.xxx'
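Once the decoder works, you can build alerts on top of it. Here is an example of what a couple of rules could look like (rule IDs and levels are arbitrary, pick what fits your setup):

<group name="cloudtrail,">
  <rule id="100100" level="3">
    <decoded_as>cloudtrail</decoded_as>
    <description>AWS CloudTrail event.</description>
  </rule>
  <rule id="100101" level="10">
    <if_sid>100100</if_sid>
    <action>TerminateInstances</action>
    <description>AWS EC2 instance terminated.</description>
  </rule>
</group>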
So easy! Schedule the script via a cron job to automatically grab new events, and happy logging! The CloudTrail service is still in beta and not (yet) available everywhere (e.g. not in the EU region), but it seems to be working quite well. My script is available here.
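For example, the following crontab entry (paths are examples) grabs new events every five minutes; just make sure OSSEC monitors the flat file via a <localfile> entry in ossec.conf:

*/5 * * * * /usr/local/bin/getawslog.py -b xxxxxx -l /var/log/cloudtrail.log -j -D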
Hi,
I just want to keep my CloudTrail logs in the S3 bucket, which I can achieve by removing -D from the command. But when I run the command again to fetch new logs, it pulls the full set of logs again, including the ones I already pulled. How can I do a differential pull without deleting the files?
Please help.
Feel free to submit a pull request!
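In the meantime, one possible approach (a sketch, not part of the current script) is to remember the processed key names in a local state file and skip them on the next run:

import os

STATE = "processed.txt"  # hypothetical state file

def new_keys(bucket):
    seen = set()
    if os.path.exists(STATE):
        seen = set(open(STATE).read().split())
    with open(STATE, "a") as state:
        for key in bucket.list():
            if key.name in seen:
                continue          # already pulled on a previous run
            yield key
            state.write(key.name + "\n")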
I added a little more exception handling, if anyone would like my changes.
The regex provided in the decoder definition is now out of date. Event versions have progressed to 1.01, so the regex should have another digit:
^"eventVersion":"\d.\d\d"