
How to Download Your Fitbit Second-Level Data Without Coding


If you are a Fitbit user who wants to save a copy of your Fitbit data on your computer but doesn’t have advanced programming skills, this tutorial is for you! You don’t need to do any coding at all to save your second-level data. I struggled with getting all of the so-called ‘intraday’ data for quite a while. I found many useful resources online, for example Paul’s tutorial, Collin Chaffin’s PowerShell module, and the Fitbit-Python API, but they are somewhat complicated and I just could not make any of them work smoothly. Recently I finally figured out a way to download this Fitbit data without any coding. Are you ready?

Step 1: Register an app on the Fitbit developer site

First you need to register an account on the Fitbit developer site, then click ‘MANAGE YOUR APPS’ in the top right area. Next, click the ‘Register a new app’ button to get started. This step is simple; just make sure the OAuth 2.0 Application Type is set to ‘Personal’, and that the Callback URL is complete – including the ‘http://’ part and a trailing ‘/’. Here I used ‘’ as an example. I have also used this blog’s URL ‘’, which worked just fine.


After you click the red ‘Register’ button, you will be able to see the credentials for the app you just registered. The ‘OAuth 2.0 Client ID’ and ‘Client Secret’ will be used in the next step.



Step 2: Use the OAuth 2.0 tutorial page

Next, right-click the ‘OAuth 2.0 tutorial page’ link and open it in a new tab, so that you can refer back to your app’s credentials easily. Make sure ‘Implicit Grant Flow’ is selected instead of ‘Authorization Code Flow’ – this makes things much easier! After you copy and paste the ‘Client ID’ and ‘Client Secret’ into the blanks and fill in the Redirect URI, click the auto-generated link.
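For the curious, the auto-generated link is just the standard Fitbit OAuth 2.0 authorize URL with your app’s details as query parameters. A minimal sketch of how it is built (the client ID and redirect URI below are placeholders, and you should double-check the endpoint against the current Fitbit docs):

```python
from urllib.parse import urlencode

def build_authorize_url(client_id, redirect_uri,
                        scope='profile heartrate sleep',
                        expires_in=604800):
    """Build the Implicit Grant authorization URL that the tutorial
    page generates for you. response_type='token' is what selects
    the Implicit Grant Flow."""
    params = {
        'response_type': 'token',
        'client_id': client_id,
        'redirect_uri': redirect_uri,
        'scope': scope,
        'expires_in': expires_in,  # one week, in seconds
    }
    return 'https://www.fitbit.com/oauth2/authorize?' + urlencode(params)

# Placeholder credentials -- substitute your own:
print(build_authorize_url('YOUR_CLIENT_ID', 'http://your-site.example/'))
```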


Then you will see the confirmation page. Just click ‘Allow’.


And you will be led to the ‘Redirect URI’, which is ‘’ in this case. But the address bar now shows a very long string, which contains the token for your app.


Next, copy everything in the address bar except the starting part ( ), paste it into the ‘Parse response’ section, and hit the Enter key once. This way you can clearly see what the token is, what the scope is, and how long the token is valid. In this case, the token is valid for 1 week, which equals 604800 seconds. Pretty good, right?
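The ‘Parse response’ box is doing nothing more than splitting that long URL fragment into key/value pairs, which you can also do yourself in Python. A sketch with a made-up fragment (your real one will contain your own access token):

```python
from urllib.parse import parse_qs

# Made-up example of the fragment your browser shows after 'Allow':
fragment = ('access_token=eyJhbGciOi_EXAMPLE&user_id=ABC123'
            '&scope=heartrate+profile+sleep&token_type=Bearer'
            '&expires_in=604800')

parsed = parse_qs(fragment)
token = parsed['access_token'][0]
lifetime = int(parsed['expires_in'][0])
print('token:', token)
print('valid for', lifetime // 86400, 'days')  # 604800 s = 7 days
```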



Step 3: Make the request and get the data!

After you are done with the ‘Parse response’ step, the next step is filled in for you automatically.


Just click the ‘Send to’ link, then ‘Launch Request’ on the new page. Make sure to confirm that you are not a robot, too.


After that, if you see the ‘200 OK’ status, everything worked fine, and you can find the data you want in the lower half of the page, like this:


If you click ‘view raw’ on the right side, the ‘BODY’ will change to raw text, and you can simply copy and paste it into a text editor. And that’s all you need to do to download your Fitbit second-level data! As promised, you don’t need any programming skills to accomplish this. Isn’t that cool?

Additional tips:

Tip 1: other data types

If you want some other data rather than the ‘user profile’, simply change the ‘API endpoint URL’ in the ‘Make Request’ step. According to the Fitbit API documentation, you can get the heart rate data by using this URL:

Or the sleep data by using:

Tip 2: save the JSON file in an easier way

If you think the ‘Send to’ method is not fast enough, you can copy the ‘curl’ command auto-generated in the ‘Make Request’ step and run it in a terminal (on Windows, the ‘cmd’ window). Add the following to the end of the copied ‘curl’ command so that the data is saved to your disk:

>> data_file.json

Tip 3: save multiple days’ data automatically

I think Fitbit provides the functionality to download multiple days’ data with one command, by specifying the date range in the curl request. However, I could not make it work for some unknown reason, so I used a piece of Python code to download multiple days’ data instead. Here’s the code, which I think is pretty self-explanatory.

import requests
import json
import pandas as pd
from time import sleep

# Put the token for your app between the single quotes
token = ''

# Make a list of dates
# ref:
# You can change the start and end dates as you want;
# just make sure to use the yyyy-mm-dd format.
start_date = '2015-12-28'
end_date = '2016-06-14'
datelist = pd.date_range(start=pd.to_datetime(start_date),
                         end=pd.to_datetime(end_date)).tolist()

# The loop below generates one URL for each day in datelist, requests
# each day's data, and saves it into an individual JSON file.
# Because Fitbit limits clients to 150 requests per hour, the code
# sleeps for 30 seconds between requests to stay under that limit.
for ts in datelist:
    date = ts.strftime('%Y-%m-%d')
    url = '' + date + '/1d/1sec/time/00:00/23:59.json'
    filename = 'HR' + date + '.json'
    response = requests.get(url=url, headers={'Authorization': 'Bearer ' + token})
    if response.ok:
        with open(filename, 'w') as f:
            json.dump(response.json(), f)
        print(date + ' is saved!')
    else:
        print('The file of %s is not saved due to error!' % date)
    sleep(30)

Happy hacking!

How to Get 0.99+ Accuracy in Kaggle Digit Recognizer Competition


Recently I have spent a lot of time working on the Kaggle digit recognizer competition and finally reached an accuracy higher than 0.99. I am quite happy with it and would like to share with everyone how I did it. Basically I used TensorFlow to build a neural network with these ‘highlights’:

  1. three hidden layers, with some dropout between each layer, but no convolution
  2. a training data set 25 times larger – generated by nudging the original training images up, down, left, and right by 1 pixel each
  3. an exponentially decaying learning rate
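For point 3, the schedule can be sketched in plain Python. This mirrors the exponential-decay formula TensorFlow provides; the base rate, decay factor, and step size below are illustrative, not the values from my script:

```python
def exponential_decay(base_rate, decay_rate, decay_steps, global_step):
    """Learning rate that shrinks by a factor of 'decay_rate' every
    'decay_steps' training steps."""
    return base_rate * decay_rate ** (global_step / decay_steps)

# Illustrative values only:
for step in (0, 1000, 2000):
    print(step, exponential_decay(0.1, 0.96, 1000, step))
```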

You can find the code here.

Unlike some other scripts for this competition, like this one and this one, my neural network does not use convolution, mainly because I do not have a GPU and do not want to pay for AWS… However, I think my neural network did just as good a job as those two.

[Leaderboard screenshot, 06/02/2016: rank 163]

I learned a lot about machine learning through this ‘101’ Kaggle competition. The biggest lesson I learned is that picking the right model is more important than fine-tuning the parameters of a model. Just like any other task, finding the right tool is always the very first step. Before I decided to use a neural network, I had already tried several other models, including logistic regression, SVM, and k-nearest neighbors, but the accuracy never went above 0.97, no matter how hard I tried to fine-tune the model parameters.

The second biggest lesson is that using a large training data set really helps improve a neural network’s accuracy. I adopted this ‘nudging images’ idea from an example I found online. I really like this idea and will probably keep using it for other projects.
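The nudging itself is only a few lines of NumPy. A minimal sketch of the basic 1-pixel version, which expands a batch 5× (the originals plus four shifted copies); a 25× set would come from applying more shift combinations, and real code should decide how to fill the vacated border rather than wrapping:

```python
import numpy as np

def nudge_dataset(images):
    """Expand a batch of 2-D images 5x: the originals plus copies
    shifted by 1 pixel up, down, left, and right. np.roll wraps
    pixels around the border, which is harmless for digit images
    with empty margins."""
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    nudged = [images]
    for dy, dx in shifts:
        nudged.append(np.roll(np.roll(images, dy, axis=1), dx, axis=2))
    return np.concatenate(nudged, axis=0)

batch = np.zeros((100, 28, 28))   # e.g. 100 MNIST-sized digits
expanded = nudge_dataset(batch)
print(expanded.shape)             # (500, 28, 28)
```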

Last but not least, I realized that it is not good to work only on your own. I need to read about what other people have done, talk to different people, even machine learning laymen, and listen to other people’s criticism whenever they are kind enough to offer it. So, what is your criticism of this neural network?