Thursday, April 19, 2018

I gave my exit seminar today

Today I had my exit seminar, which is the equivalent of a PhD defense, since at Berkeley we don't have a defense like other places. I am one step closer to Dr. Kong now ^)^

It started with my advisor Richard giving an introduction of me. I really enjoyed working with Richard over the last 7 years, and I learned so much from him! Then my friends Kathryn and Felipe gave a hilarious presentation to toast me, showing fun facts about my last 7 years. I then summarized my work at Berkeley in a 40-minute talk, all about MyShake, the project I worked on during my PhD life here at Berkeley. Looking back at 7 years of work, I really accomplished a lot!

I am really glad that my whole family came to my seminar. Even though my parents don't speak English, they always support me without asking for anything in return! I am also really happy that my good colleague and friend Roman from DT joined me to celebrate my graduation!

I feel really lucky that I could come to Berkeley for a PhD; it is a life-changing event that will shape my life forever. I really appreciate all the people who helped me here and accompanied me during these 7 years. What a valuable opportunity this has been!

Now, I am really excited about the new life after graduation (technically, I still need to submit my dissertation and get signatures from my committee members to graduate). At the same time, I am a little sad about finishing my PhD; it marks a milestone for me, and I am so used to my role as a PhD student ^_^ I think there are a lot of interesting things to do in the future - just follow your heart and have fun in the work!

Sunday, April 15, 2018

Python: Jenks Natural Breaks

This week we will talk about Jenks Natural Breaks, which is mostly useful for determining map ranges. It finds the best way to split the data into ranges. For example, suppose we have 50 countries: 15 countries with values from 0 - 3, 20 countries with values from 5 - 10, and 15 countries with values from 15 - 20. If we want to plot them on a map with different colors, the best way to split the data is 0-3, 3-10, and 10-20. Jenks Natural Breaks is an algorithm that figures this out by grouping similar values together. Let's see the example below. I am using an existing package - jenkspy - to calculate the breaks. You can find the notebook on Qingkai’s Github.
import jenkspy
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-poster')

%matplotlib inline
# Let's generate this 3 classes data
data = np.concatenate([np.random.randint(0, 4, 15), \
  np.random.randint(5, 11, 20), np.random.randint(15, 21, 15)])
# Let's calculate the breaks
breaks = jenkspy.jenks_breaks(data, nb_class=3)
breaks
[0.0, 3.0, 10.0, 20.0]
plt.figure(figsize = (10,8))
hist = plt.hist(data, bins=20, align='left', color='g')
for b in breaks:
    plt.vlines(b, ymin=0, ymax = max(hist[0]))
[Figure: histogram of the data with the Jenks breaks drawn as vertical black lines]
We can see in the figure above that the breaks (black lines) are exactly what we expect!

How does it work?

The method is an iterative process: it repeatedly tests different breaks in the dataset to determine which set of breaks has the smallest in-class variance. You can see in the figure above that within each group/class, the variance is smallest. Note that here we only minimize the in-class variance; if instead we maximized the out-of-class variance (that is, the variance between different groups), the breaks would fall into the middle of the gaps in the figure above (in this case, roughly 4.5 and 12.5, but I didn't try it).
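To make this idea concrete, below is a minimal brute-force sketch (my own illustration, not the optimized algorithm that jenkspy implements): it tries every possible set of split points on the sorted data and keeps the set with the smallest total within-class variance. It is exponentially slow and only meant to show the objective being minimized.
import itertools
import numpy as np

def total_within_class_variance(groups):
    # sum of squared deviations from each group's own mean
    return sum(np.sum((g - g.mean())**2) for g in groups)

def brute_force_breaks(data, nb_class=3):
    x = np.sort(np.asarray(data, dtype=float))
    best_cost, best_breaks = np.inf, None
    # choose nb_class - 1 interior split positions between the sorted values
    for idx in itertools.combinations(range(1, len(x)), nb_class - 1):
        bounds = (0,) + idx + (len(x),)
        groups = [x[bounds[i]:bounds[i + 1]] for i in range(nb_class)]
        cost = total_within_class_variance(groups)
        if cost < best_cost:
            best_cost = cost
            # report breaks jenkspy-style: the data minimum, then the upper bound of each class
            best_breaks = [x[0]] + [g[-1] for g in groups]
    return best_breaks

# brute_force_breaks(data, nb_class=3) should give the same breaks as jenkspy on the small example above
The real implementation (often described as Fisher-Jenks) uses a dynamic-programming formulation, so it is much faster than this brute-force version, but the objective it minimizes is the same.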

Another example

Let's have fun and see what the breaks are for a normal distribution. (I didn't find the connection to 3 sigmas that I expected!)
np.random.seed(1)
normal = np.random.normal(loc=0.0, scale=1.0, size=500)
breaks = jenkspy.jenks_breaks(normal, nb_class=5)
breaks
[-2.79308500014654,
 -1.3057269225577375,
 -0.39675352685597737,
 0.386539145133091,
 1.2932258825322618,
 3.0308571123720305]
plt.figure(figsize = (10,8))
hist = plt.hist(normal, bins=20, align='left', color='g')
for b in breaks:
    plt.vlines(b, ymin=0, ymax = max(hist[0]))
[Figure: histogram of the normal samples with the 5-class Jenks breaks drawn as vertical lines]


Monday, April 9, 2018

Pandas groupby example

Pandas' groupby function is really useful and powerful in many ways. This week, I am going to show some examples of using groupby that I often use in my analysis.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('seaborn-poster')

%matplotlib inline
Let's first create a DataFrame that contains the name, score, and round for some games.
a = ['Qingkai', 'Ironman', 'Batman', 'Qingkai', 'Ironman', 'Qingkai', 'Batman']
b = [3., 4, 2., 4, 5, 1, 2]
c = range(len(a))

d = [[x,y,z] for x,y,z in zip(a,b,c)]

df = pd.DataFrame(d, columns=['name', 'score', 'round'])
df
Now I want to calculate the mean score for each user across all the games, as well as the standard deviation. It is quite simple with Pandas, as I show below:
df.groupby('name').mean()
            score     round
name
Batman   2.000000  4.000000
Ironman  4.500000  2.500000
Qingkai  2.666667  2.666667
df.groupby('name').std()
            score     round
name
Batman   0.000000  2.828427
Ironman  0.707107  2.121320
Qingkai  1.527525  2.516611
Or I can loop through the groupby object once to calculate them all. 
for ix, grp in df.groupby('name'):
    print('Name: %s, mean: %.1f, std: %.1f'%(ix, grp['score'].mean(), grp['score'].std()))
Name: Batman, mean: 2.0, std: 0.0
Name: Ironman, mean: 4.5, std: 0.7
Name: Qingkai, mean: 2.7, std: 1.5
We could also do it as a one-liner using the agg function:
df.groupby('name').agg({'score':['mean','std'],'round':'count'})
            score              round
             mean       std    count
name
Batman   2.000000  0.000000        2
Ironman  4.500000  0.707107        2
Qingkai  2.666667  1.527525        3
Besides, you can also use customized functions in agg. For example, if we want to calculate the RMS value of the score, we could do the following:
def cal_RMS(x):
    # root mean square: square root of the mean of the squared values
    return np.sqrt(sum(x**2/len(x)))
df.groupby('name').agg({'score':['mean',cal_RMS],'round':'count'})
            score              round
             mean   cal_RMS    count
name
Batman   2.000000  2.000000        2
Ironman  4.500000  4.527693        2
Qingkai  2.666667  2.943920        3

Wednesday, March 28, 2018

I have more than 100,000 pageviews today

I just found out that my blog has reached more than 100k pageviews in the last two years. To me, that is something worth celebrating ^)^

I started to write blog posts more regularly two years ago, using them as a way to keep the fun things that happen each week. I found that most of the time, I go back to my blog to look up information I need. Then I realized it has become a habit: I feel something is unfinished if I miss a post in a week. This feeling keeps me writing regularly. Sometimes, when I am traveling or have other important things to do, I try to make it up later ^)^

Anyway, this is the number as of today:


Also, the top 5 posts are:


Tuesday, March 27, 2018

Python fitting curves

Recently a friend asked me how to fit a function to some observational data using Python. Well, it depends on whether you have a functional form in mind. If you have one, then it is easy to do. Even if you don't know the form of the function you want to fit, you can still do it fairly easily. Here are some examples. You can find all the code on Qingkai’s Github.
import numpy as np
import matplotlib.pyplot as plt

%matplotlib inline

plt.style.use('seaborn-poster')

If you can tell the function form from the data

For example, suppose I have the following data, which is actually generated from the function 3*exp(-0.05x) + 12 but with some noise added. When we see this dataset, we can tell it might come from an exponential function.
np.random.seed(42)

x = np.arange(-10, 10, 0.1)

# true data generated by this function
y = 3 * np.exp(-0.05*x) + 12

# adding noise to the true data
y_noise = y + np.random.normal(0, 0.2, size = len(y))
plt.figure(figsize = (10, 8))
plt.plot(x, y_noise, 'o', label = 'Raw data generated')
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
plt.show()
[Figure: scatter plot of the noisy raw data]
Since we already have the function form in mind, let's fit the data using the scipy function curve_fit:
from scipy.optimize import curve_fit
def func(x, a, b, c):
    return a * np.exp(-b * x) + c
popt, pcov = curve_fit(func, x, y_noise)
plt.figure(figsize = (10, 8))

plt.plot(x, y_noise, 'o',
    label='Raw data')

plt.plot(x, func(x, *popt), 'r',
    label='Fit: a=%5.3f, b=%5.3f, c=%5.3f' % tuple(popt))

plt.legend()

plt.xlabel('X')
plt.ylabel('y')

plt.show()
[Figure: raw data with the fitted exponential curve and the fitted parameters in the legend]
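As a side note, curve_fit also returns the covariance matrix pcov, which we captured above but didn't use. If you want a rough 1-sigma uncertainty for each fitted parameter, a common recipe is to take the square root of its diagonal:
# rough 1-sigma uncertainties of the fitted parameters a, b, c
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(['a', 'b', 'c'], popt, perr):
    print('%s = %.3f +/- %.3f' % (name, val, err))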

More complicated case: you don't know the function form

For a more complicated case where we cannot easily guess the form of the function, we could use a spline to fit the data. For example, we could use UnivariateSpline.
from scipy.interpolate import UnivariateSpline
np.random.seed(42)

x = np.arange(-10, 10, 0.1)

# true data generated by this function
y = 3 * np.exp(-0.05*x) + 12 + 1.4 * np.sin(1.2*x) + 2.1 * np.sin(-2.2*x + 3)

# adding noise to the true data
y_noise = y + np.random.normal(0, 0.5, size = len(y))
plt.figure(figsize = (10, 8))
plt.plot(x, y_noise, 'o', label = 'Raw data generated')
plt.plot(x, y, 'k', label = 'True model')
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
plt.show()
[Figure: noisy raw data together with the true model]
# Note: you need to play with s - the smoothing factor
s = UnivariateSpline(x, y_noise, s=15)
xs = np.linspace(-10, 10, 100)
ys = s(xs)
plt.figure(figsize = (10, 8))
plt.plot(x, y_noise, 'o', label = 'Raw data')
plt.plot(x, y, 'k', label = 'True model')
plt.plot(xs, ys, 'r', label = 'Fitting')
plt.xlabel('X')
plt.ylabel('y')
plt.legend(loc = 1)
plt.show()
[Figure: raw data, true model, and the spline fit]
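Since the smoothing factor s controls how closely the spline follows the noisy data, a quick way to get a feel for it is to plot a few different values on the same data (a small sketch; the values of s here are just for illustration):
plt.figure(figsize = (10, 8))
plt.plot(x, y_noise, 'o', label = 'Raw data')
# smaller s follows the noise more closely, larger s gives a smoother curve
for s_value in [1, 15, 100]:
    spl = UnivariateSpline(x, y_noise, s = s_value)
    plt.plot(xs, spl(xs), label = 's = %d'%s_value)
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
plt.show()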

Of course, you could also use Machine Learning algorithms

Many machine learning algorithms could do the job as well: you could treat this as a regression problem and train a model to fit the data. I will show two methods here - Random Forest and ANN. I didn't do any testing here; I am only fitting the data and using my judgment to choose parameters so that the model is not too flexible (a quick hold-out check is sketched at the end of this post).

Use Random Forest

from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
# fit the model and get the estimation for each data points
yfit = RandomForestRegressor(50, random_state=42).fit(x[:, None], y_noise).predict(x[:, None])

plt.figure(figsize = (10,8))
plt.plot(x, y_noise, 'o', label = 'Raw data')
plt.plot(x, y, 'k', label = 'True model')
plt.plot(x, yfit, '-r', label = 'Fitting', zorder = 10)
plt.legend()
plt.xlabel('X')
plt.ylabel('y')

plt.show()
[Figure: raw data, true model, and the Random Forest fit]

Use ANN

from sklearn.neural_network import MLPRegressor
mlp = MLPRegressor(hidden_layer_sizes=(100,100,100), max_iter = 5000, solver='lbfgs', alpha=0.01, activation = 'tanh', random_state = 8)

yfit = mlp.fit(x[:, None], y_noise).predict(x[:, None])

plt.figure(figsize = (10,8))
plt.plot(x, y_noise, 'o', label = 'Raw data')
plt.plot(x, y, 'k', label = 'True model')
plt.plot(x, yfit, '-r', label = 'Fitting', zorder = 10)
plt.legend()

plt.xlabel('X')
plt.ylabel('y')

plt.show()
[Figure: raw data, true model, and the ANN fit]
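Related to the note above that I didn't do any testing: if you want a quick sanity check that the model is not just memorizing the noise, a minimal sketch is to hold out part of the data and compare the training and test errors (reusing the mean_squared_error we imported earlier; the 30% split and random_state here are arbitrary choices):
from sklearn.model_selection import train_test_split

# hold out 30% of the points to check how well the model generalizes
x_train, x_test, y_train, y_test = train_test_split(x[:, None], y_noise, test_size = 0.3, random_state = 42)

model = RandomForestRegressor(n_estimators = 50, random_state = 42).fit(x_train, y_train)
print('train MSE: %.3f'%mean_squared_error(y_train, model.predict(x_train)))
print('test MSE: %.3f'%mean_squared_error(y_test, model.predict(x_test)))
If the test error is much larger than the training error, the model is probably too flexible for this amount of data.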