Apologies for the previously broken links! The latest version on GitHub (below) should now work for both versions. Check out the fancy workbook! CPU / Memory usage will come back soon; the log format changed in 2019.1. ——————————

Download the latest version (.exe): here

GitHub code for both (+ workbooks) available: here

Latest workbook – click the image below to interact:


I’m going to caveat this immediately with the fact that it’s unsupported – although I am interested if this thing breaks! It’s also purpose-built for Windows logs on 2018.2+ and Linux logs on 2018.1+.


For the eagle-eyed Server Admins amongst us: you may have noticed a few changes to your log files following the introduction of TSM. First of all, the structure has changed ever so slightly. Logs are now located in this directory:

An extra level added to the hierarchy!

Second, there’s a slight tweak to the naming convention. The old vizqlserver* logs now carry an additional nativeapi_ tag on the front. It’s a minor change, but it helps identify where the rich, detailed logs live.

Finally, and most importantly, there’s a change in the logging process itself. It comes as part of a program we’ve called Activity Resource Tracing – ART, for short. ART introduces inline memory and CPU logging for certain important statements, and it looks a little like this:

Memory & CPU lines included
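As a rough sketch of what reading one of these lines involves (the keys inside "v" below are assumptions for illustration, not the exact ART schema), each vizqlserver log line is a JSON document that can be parsed directly:

```python
import json

# A hypothetical ART log line. The "k"/"v" layout matches the vizqlserver
# JSON logs, but the resource keys inside "v" here are illustrative only.
line = ('{"ts": "2018-09-01T10:00:00.000", "sev": "info", "k": "end-query", '
        '"v": {"elapsed": 1.2, "cpu": {"this": 450}, "mem": {"this": 1048576}}}')

record = json.loads(line)
if record.get("k") == "end-query":
    v = record["v"]
    print(v["elapsed"], v["cpu"]["this"], v["mem"]["this"])  # 1.2 450 1048576
```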

You can see the wealth of information that lies in the end-query k value. Wouldn’t it be nice to extract the information from each line? Imagine the possibilities:

  • Identify workbooks that use large amounts of CPU or memory.
  • Spot users that could benefit from some gentle advice on best practices.
  • Visualise usage peaks and identify if your server is hitting capacity.
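Once those values are in a table, the first bullet falls out of a simple aggregation. A minimal pandas sketch, using hypothetical column names rather than LumberSnake’s actual output schema:

```python
import pandas as pd

# Toy data with assumed column names; LumberSnake's real output differs.
df = pd.DataFrame({
    "workbook": ["Sales", "Sales", "Finance"],
    "user": ["amy", "bob", "amy"],
    "cpu_ms": [1200, 300, 4500],
    "mem_bytes": [10485760, 2097152, 52428800],
})

# Total CPU and memory per workbook, heaviest first.
top = df.groupby("workbook")[["cpu_ms", "mem_bytes"]].sum().sort_values(
    "cpu_ms", ascending=False)
print(top)
```

The same groupby on the user column would surface the candidates for that gentle best-practice advice.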

During Tableau Conference Europe, my colleague Alex Ross and I covered a few different ways to extract that information. But now I want to throw one more method into the mix: LumberSnake.

LumberSnake is an evolution of some extraction processes we’ve been adopting internally at Tableau, originally pioneered by David Spezia and Lumberjack. At its core, it’s a simple way to extract the super-useful vizql logs and transform them in a way that makes sense for Tableau to use. Developed in Python, all you have to do is run it, select your extracted viz logs, and let the program do the rest. Simple!
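That core idea (unzip the archive, find the vizql logs, parse each JSON line into a row) can be sketched in a few lines of Python. This is a simplified stand-in, not LumberSnake’s actual code, and the file pattern is an assumption:

```python
import glob
import json
import zipfile

import pandas as pd

def extract_zip(zip_path, out_dir="logs"):
    # Unpack a ziplog archive so the individual log files can be read.
    with zipfile.ZipFile(zip_path) as z:
        z.extractall(out_dir)

def logs_to_frame(pattern="logs/**/*vizqlserver*.txt"):
    # Parse every JSON line in the matching log files into one DataFrame row.
    rows = []
    for path in glob.glob(pattern, recursive=True):
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                try:
                    rows.append(json.loads(line))
                except ValueError:
                    continue  # skip any non-JSON lines
    return pd.DataFrame(rows)
```

From there, something like `logs_to_frame().to_csv("output.csv")` produces a flat file Tableau can read.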

How to use LumberSnake

Better yet, I’ve made this open source, so the full code is available on GitHub! As it’s a Git project, feel free to contribute if you spot any enhancements. If you do want to run this directly from Python, make sure you have Python 2.7+ installed, as well as the Pandas library.

LumberSnake will output a .csv file with your transformed data. I’ve also included the base workbook I currently use in the Git repository so you can point it at your output and enjoy!

Check it out and let me know your thoughts on Twitter. In the meantime, expect a few more blog posts in the future that you can use as a reference behind the information you extract here.


13 Comments

Vamsi · 17 September 2018 at 2:24 pm

Got the following error after installing all the prerequisites. Did anyone find any fix for this?

Installed it on Windows Server 2012 R2 with Python | Pandas | NumPy | Dateutil | Pytest > 3

Begin Processing …
Checking Log Dump for files …
Cleaning all files in filepath .\Log Dump\
worker1.zip
worker2.zip
failed on tabadminservice/inprogress_remote_logs.zip
[Errno 2] No such file or directory: './Log Dump/tabadminservice/inprogress_remote_logs.zip'
Successfully extracted worker1.zip!
Successfully extracted worker2.zip!
.\Log Dump\worker1.zip has no data!
.\Log Dump\worker2.zip has no data!
qp-batch processed …
Traceback (most recent call last):
File "LumberSnake20181.py", line 204, in
File "LumberSnake20181.py", line 122, in get_merged
IndexError: list index out of range
[33172] Failed to execute script LumberSnake20181

    Tom · 18 September 2018 at 8:39 am

    Hey Vamsi,
    What version of Tableau Server (+ OS) did you run these against?
    It looks like you’re potentially running the 2018.1 version against a set of 2018.2 logs 🙂
    Another thing worth checking is if you have any nativeapi_vizqlserver (or vizqlserver) entries in your log zip.
    Tom
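For anyone wanting to make that check programmatically, a quick sketch (illustration only, not part of LumberSnake) that lists the vizqlserver entries inside a worker log zip:

```python
import zipfile

# List the vizqlserver entries inside a worker log zip, to confirm the
# detailed logs are actually present before running the extraction.
def vizql_entries(zip_path):
    with zipfile.ZipFile(zip_path) as z:
        return [name for name in z.namelist() if "vizqlserver" in name]
```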

Lalit · 11 January 2019 at 4:04 pm

Hey Tom,
This is an excellent way to show resource consumption with logs. I was wondering if we can show the overall CPU % as well using this log file data? Also, I’m looking to group the CPU by User or System CPU so we can identify if housekeeping activities are taking longer than usual. I want to show this data for each of my Sites as % CPU utilisation, so I need a total capacity mark. Do you know if that is possible, or should I be using Windows server logs?

Thanks in Advance.

–Lalit

Vensus · 1 March 2019 at 12:27 pm

I got the below error when I ran this script (LumberSnake pre-2018.2.py).

Please help!!
Begin Processing …
Checking Log Dump for files …
Cleaning all files in filepath .\Log Dump\
worker1.zip
worker2.zip
vizqlserver/Logs/vizqlserver_1-0_2019_02_26_09_42_08.txt
vizqlserver/Logs/vizqlserver_1-0_2019_02_27_09_42_17.txt
vizqlserver/Logs/vizqlserver_1-1_2019_02_26_09_58_34.txt
vizqlserver/Logs/vizqlserver_1-1_2019_02_27_09_58_46.txt
vizqlserver/Logs/vizqlserver_1-2_2019_02_26_09_39_19.txt
vizqlserver/Logs/vizqlserver_1-2_2019_02_27_09_39_19.txt
vizqlserver/Logs/vizqlserver_1-3_2019_02_26_09_47_40.txt
vizqlserver/Logs/vizqlserver_1-3_2019_02_27_09_48_14.txt
Successfully extracted worker1.zip!
Successfully extracted worker2.zip!
.\Log Dump\01_01_vizqlserver_1-0_2019_02_26_09_42_08.txt file processed
.\Log Dump\02_02_vizqlserver_1-0_2019_02_27_09_42_17.txt file processed
.\Log Dump\03_03_vizqlserver_1-1_2019_02_26_09_58_34.txt file processed
.\Log Dump\04_04_vizqlserver_1-1_2019_02_27_09_58_46.txt file processed
.\Log Dump\05_05_vizqlserver_1-2_2019_02_26_09_39_19.txt file processed
.\Log Dump\06_06_vizqlserver_1-2_2019_02_27_09_39_19.txt file processed
.\Log Dump\07_07_vizqlserver_1-3_2019_02_26_09_47_40.txt file processed
.\Log Dump\08_08_vizqlserver_1-3_2019_02_27_09_48_14.txt file processed
.\Log Dump\worker1.zip has no data!
.\Log Dump\worker2.zip has no data!
qp-batch processed …
Traceback (most recent call last):
File "pandas\_libs\parsers.pyx", line 1162, in pandas._libs.parsers.TextReader._convert_tokens (pandas\_libs\parsers.c:14858)
File "pandas\_libs\parsers.pyx", line 1273, in pandas._libs.parsers.TextReader._convert_with_dtype (pandas\_libs\parsers.c:17119)
File "pandas\_libs\parsers.pyx", line 1289, in pandas._libs.parsers.TextReader._string_convert (pandas\_libs\parsers.c:17347)
File "pandas\_libs\parsers.pyx", line 1524, in pandas._libs.parsers._string_box_utf8 (pandas\_libs\parsers.c:23041)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcd in position 23005: invalid continuation byte

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\Tableau Admin\Lumbersnake\LumberSnake pre-2018.2.py", line 205, in
get_merged(ufiles).to_csv('LumberSnake.csv')
File "D:\Tableau Admin\Lumbersnake\LumberSnake pre-2018.2.py", line 125, in get_merged
dfoutput = dfoutput.merge(pd.read_csv(f, **kwargs), how='outer')
File "C:\Python34\lib\site-packages\pandas\io\parsers.py", line 655, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Python34\lib\site-packages\pandas\io\parsers.py", line 411, in _read
data = parser.read(nrows)
File "C:\Python34\lib\site-packages\pandas\io\parsers.py", line 1005, in read
ret = self._engine.read(nrows)
File "C:\Python34\lib\site-packages\pandas\io\parsers.py", line 1748, in read
data = self._reader.read(nrows)
File "pandas\_libs\parsers.pyx", line 890, in pandas._libs.parsers.TextReader.read (pandas\_libs\parsers.c:10862)
File "pandas\_libs\parsers.pyx", line 912, in pandas._libs.parsers.TextReader._read_low_memory (pandas\_libs\parsers.c:11138)
File "pandas\_libs\parsers.pyx", line 989, in pandas._libs.parsers.TextReader._read_rows (pandas\_libs\parsers.c:12175)
File "pandas\_libs\parsers.pyx", line 1117, in pandas._libs.parsers.TextReader._convert_column_data (pandas\_libs\parsers.c:14136)
File "pandas\_libs\parsers.pyx", line 1169, in pandas._libs.parsers.TextReader._convert_tokens (pandas\_libs\parsers.c:14972)
File "pandas\_libs\parsers.pyx", line 1273, in pandas._libs.parsers.TextReader._convert_with_dtype (pandas\_libs\parsers.c:17119)
File "pandas\_libs\parsers.pyx", line 1289, in pandas._libs.parsers.TextReader._string_convert (pandas\_libs\parsers.c:17347)
File "pandas\_libs\parsers.pyx", line 1524, in pandas._libs.parsers._string_box_utf8 (pandas\_libs\parsers.c:23041)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcd in position 23005: invalid continuation byte

-Vensus

Martin Pohlers · 26 March 2019 at 5:07 pm

The link for the 2018.2+ installer is not valid, unfortunately.
If I use the version from GitHub, do I only need to extract the vizqlserver logs, and within those only keep the ones with the nativeapi prefix, or do I unzip all logs?

Dan · 26 April 2019 at 9:09 pm

Hey Tom,

Seems I am unable to find the Windows executable version of LumberSnake; the link is broken and takes me to an expired Egnyte link.

    Tom · 9 July 2019 at 8:59 am

    Done! Took a while but a replacement is up and running now.

srikanth vadlamannati · 3 June 2019 at 5:02 pm

I am not able to download the latest version (.exe) for 2018.2+; can you please provide an updated link?

Chuck · 12 July 2019 at 5:42 am

Hi Tom,

I’m using your old version of LumberSnake that produces the csv file. We are able to run it successfully and produce the csv file, but it shows nothing on the dashboard due to a missing “Query Category” column in the csv file.

We are on version 2018.3.7 and are unable to use your new version that produces a json file, because our large nativeapi_vizqlserver* logs generate over 1 GB of json.

Do you know what causes the Query Category column to be missing in our csv file?

    Tom · 12 July 2019 at 8:09 am

    Hey Chuck,
    I’ve seen this a couple of times where the script fails to un-nest(?) one of the lowest JSON structures. If you actually check the columns, I think you’ll see the details combined in one of the results and should be able to split that out. Let me know how you get on!
    Tom
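The kind of split Tom describes can be sketched with pandas; the column name and the keys inside it are assumptions for illustration:

```python
import json

import pandas as pd

# Toy frame where the nested JSON stayed combined in a single column.
# The column "v" and its keys are illustrative assumptions.
df = pd.DataFrame({"v": ['{"query-category": "Data", "elapsed": 0.5}',
                         '{"query-category": "Metadata", "elapsed": 0.1}']})

# Expand the JSON strings into real columns and join them back on.
expanded = pd.json_normalize(df["v"].apply(json.loads).tolist())
df = df.join(expanded)
print(df["query-category"].tolist())  # ['Data', 'Metadata']
```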

Phil · 4 August 2019 at 8:58 pm

Great stuff, Tom! I’m compiling a long-term analysis review for a customer and have 6 months’ worth of pre-TSM Tableau Server logs that I’d like to run through this fantastic utility. The only problem is that I can’t seem to get “askopenfilenames” to work so that I can multi-select all the ziplog archives that I have. I’ve also tried to zip up all the archives into a master archive, but that doesn’t work either. My most recent attempt was using File Locator Pro to grep out the “vizqlserver_” logs and place them all into a single zip archive, but that wasn’t successful either. I don’t have any background in Python coding, so I would greatly appreciate any assistance you can provide that will enable me to process multiple archives at once.

Many thanks in advance!

Arun · 2 June 2020 at 10:04 am

Hi Tom, great stuff. But I’m stuck with an issue. I’m providing the Tableau Server log file (one big zip file containing all the logs, generated using the ziplog command), but both the Python script and the exe give me only one hyper file as the output. The workbook published on Tableau Public is looking for a flat file. Am I missing anything here? Even the workbooks in this GitHub repo are looking for additional hyper files. Any help here will be much appreciated.
