Socrata was acquired by Tyler Technologies in 2018 and is now the Data and Insights division of Tyler. The platform is still powered by the same software formerly known as Socrata, but you will see references to Data & Insights going forward.

Data Analysis with Python and pandas using Jupyter Notebook

This guide describes how to use pandas and a Jupyter notebook to analyze a Socrata dataset. It covers how to do basic analysis of a dataset using pandas functions and how to transform a dataset by mapping functions over its columns.

Contents

  1. Installing Python and Jupyter
  2. Importing a Dataset Into Jupyter
  3. Basic Analysis of a Dataset
  4. Mapping Functions to Transform Data

Installing Python and Jupyter

You can run Jupyter Notebook in the cloud using a service like try.jupyter.org, or you can install and run it locally. You will need Python version 3.3+ or 2.7+.

Importing a Dataset Into Jupyter

Before we import our sample dataset into the notebook we will import the pandas library. pandas is an open source Python library that provides “high-performance, easy-to-use data structures and data analysis tools.”

import pandas as pd
print(pd.__version__)
> 0.17.1

Next, we will read the following dataset from the Open San Mateo County site: https://data.smcgov.org/Government/Educational-Attainment/mb6a-xn89

pandas provides several methods for reading data in different formats. Here we’ll read it in as JSON but you can read in CSV and Excel files as well.
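All of the pandas readers share a similar interface: they accept a file path, a URL, or a file-like object and return a DataFrame. Here is a minimal sketch using `pd.read_csv` on an in-memory CSV string (the column names below are illustrative stand-ins, not the full dataset), so it runs without a network connection:

```python
import io

import pandas as pd

# A small CSV in memory; pd.read_csv accepts a file path, URL,
# or any file-like object such as io.StringIO.
csv_text = """geography,geography_type,high_school_graduate
Atherton,Town,12.3
Belmont,City,5.9
"""

df = pd.read_csv(io.StringIO(csv_text))
print(df.shape)          # (2, 3)
print(list(df.columns))  # ['geography', 'geography_type', 'high_school_graduate']
```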

Note that you can get the help for any method by adding a “?” to the end and running the cell. For example:

pd.read_json?

The data is returned as a DataFrame, a two-dimensional spreadsheet-like data structure with columns of potentially different types. pandas has two main data structures: DataFrame and Series. A Series is a one-dimensional labeled array that can hold values of any data type, and a single column of a DataFrame can be treated as a Series.
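The relationship between the two structures can be seen directly: selecting a single column from a DataFrame yields a Series. A quick sketch on a toy frame (the column names here are illustrative, not from the dataset):

```python
import pandas as pd

df = pd.DataFrame({"city": ["Atherton", "Belmont"], "rate": [12.3, 5.9]})

# Selecting one column returns a one-dimensional Series...
col = df["city"]
print(type(col).__name__)  # Series

# ...while the DataFrame itself is two-dimensional.
print(df.ndim, col.ndim)   # 2 1
```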

Displayed below are the first 5 rows of the DataFrame we imported (to see the last n rows use .tail(n)).

df = pd.read_json("https://data.smcgov.org/resource/mb6a-xn89.json")
df.head(5)

   bachelor_s_degree_or_higher  geography   geography_type  high_school_graduate  less_than_high_school_graduate  location_1                                       some_college_or_associate_s_degree  year
0  3.5                          Atherton    Town            12.3                  13.6                            {'latitude': '37.458611', 'needs_recoding': Fa…  2.7                                 2014-01-01T00:00:00
1  3.6                          Belmont     City            5.9                   20.9                            {'latitude': '37.518056', 'needs_recoding': Fa…  5.0                                 2014-01-01T00:00:00
2  3.3                          Brisbane    City            1.7                   0.0                             {'latitude': '37.680833', 'needs_recoding': Fa…  11.5                                2014-01-01T00:00:00
3  2.4                          Broadmoor   CDP             8.0                   21.2                            {'latitude': '37.6925', 'needs_recoding': Fals…  3.0                                 2014-01-01T00:00:00
4  3.7                          Burlingame  City            8.3                   20.1                            {'latitude': '37.583333', 'needs_recoding': Fa…  7.4                                 2014-01-01T00:00:00

Basic Analysis of a Dataset

pandas has several methods that allow you to quickly analyze a dataset and get an idea of the type and amount of data you are dealing with along with some important statistics.

  • .shape - returns the row and column count of a dataset
  • .describe() - returns statistics about the numerical columns in a dataset
  • .dtypes - returns the data type of each column
df.shape
(32, 8)

df.describe()
       bachelor_s_degree_or_higher  high_school_graduate  less_than_high_school_graduate  some_college_or_associate_s_degree
count  32.000000                    32.000000             32.00000                        32.000000
mean   2.856250                     6.462500              17.80000                        5.946875
std    1.873919                     4.693905              19.29944                        4.728430
min    0.000000                     0.000000              0.00000                         0.000000
25%    2.100000                     1.925000              6.82500                         2.525000
50%    3.000000                     7.750000              13.90000                        5.500000
75%    3.600000                     9.450000              20.97500                        8.800000
max    9.100000                     16.400000             100.00000                       18.500000

You can also run the .describe method with the “include=’all’” flag to get statistics on the non-numeric column types. In this example we have to drop the “location_1” column because the .describe method doesn’t accept dictionary objects.

df.drop("location_1", axis=1).describe(include="all")
        bachelor_s_degree_or_higher  geography  geography_type  high_school_graduate  less_than_high_school_graduate  some_college_or_associate_s_degree  year
count   32.000000                    32         32              32.000000             32.00000                        32.000000                           32
unique  NaN                          32         3               NaN                   NaN                             NaN                                 1
top     NaN                          Millbrae   City            NaN                   NaN                             NaN                                 2014-01-01T00:00:00
freq    NaN                          1          15              NaN                   NaN                             NaN                                 32
mean    2.856250                     NaN        NaN             6.462500              17.80000                        5.946875                            NaN
std     1.873919                     NaN        NaN             4.693905              19.29944                        4.728430                            NaN
min     0.000000                     NaN        NaN             0.000000              0.00000                         0.000000                            NaN
25%     2.100000                     NaN        NaN             1.925000              6.82500                         2.525000                            NaN
50%     3.000000                     NaN        NaN             7.750000              13.90000                        5.500000                            NaN
75%     3.600000                     NaN        NaN             9.450000              20.97500                        8.800000                            NaN
max     9.100000                     NaN        NaN             16.400000             100.00000                       18.500000                           NaN
df.dtypes

bachelor_s_degree_or_higher           float64
geography                              object
geography_type                         object
high_school_graduate                  float64
less_than_high_school_graduate        float64
location_1                             object
some_college_or_associate_s_degree    float64
year                                   object
dtype: object
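Notice that the year column is stored as a plain object (string) column rather than as dates. If you wanted to treat it as actual timestamps, one option is `pd.to_datetime`; here is a hedged sketch on a stand-in frame that mimics the dataset's ISO timestamp format rather than the full dataset:

```python
import pandas as pd

# Stand-in for the dataset's "year" column (same ISO timestamp format).
df = pd.DataFrame({"year": ["2014-01-01T00:00:00", "2014-01-01T00:00:00"]})

# Parse the strings into a proper datetime column.
df["year"] = pd.to_datetime(df["year"])
print(df["year"].dtype)            # datetime64[ns]
print(df["year"].dt.year.iloc[0])  # 2014
```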

Here are some additional methods that can give you statistics of a DataFrame or particular column in a DataFrame.

  • .mean(axis=0) - returns the statistical mean of each column
  • .median(axis=0) - returns the statistical median of each column
  • .mode(axis=0) - returns the statistical mode of each column
  • .count() - returns the number of non-null values in a column
  • .unique() - returns an array of all unique values in a column
  • .value_counts() - returns an object containing counts of each unique value
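As a quick offline illustration of these methods, here is a toy Series (not from the dataset) where the results are easy to check by hand:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 2.0, 4.0])

print(s.mean())              # 2.25
print(s.median())            # 2.0
print(s.mode()[0])           # 2.0
print(s.count())             # 4
print(s.unique())            # [1. 2. 4.]
print(s.value_counts()[2.0]) # 2 (the value 2.0 appears twice)
```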
df.bachelor_s_degree_or_higher.mean()
2.8562499999999997

df.geography.count()
32

df.geography_type.unique()
array(['Town', 'City', 'CDP'], dtype=object)

df.less_than_high_school_graduate.value_counts()
0.0      4
16.4     1
9.5      1
100.0    1
7.0      1
8.5      1
7.7      1
44.4     1
31.1     1
4.8      1
13.4     1
3.3      1
22.1     1
20.9     1
6.3      1
21.2     1
11.9     1
15.7     1
14.2     1
16.1     1
26.7     1
18.3     1
48.1     1
37.8     1
13.6     1
20.1     1
15.1     1
6.2      1
9.2      1
Name: less_than_high_school_graduate, dtype: int64

Mapping Functions to Transform Data

Often we need to apply a function to a column in a dataset to transform it. pandas makes this easy with the .apply() method. In this example, we will map each value in the geography_type column to either 1 or 0 depending on its value, and append the result to the DataFrame as a new column.

def mapGeography(x):
    if x == "City":
        return 1
    else:
        return 0

df['geography_mapped_value'] = df.geography_type.apply(mapGeography)
df.geography_mapped_value.value_counts()
0    17
1    15
Name: geography_mapped_value, dtype: int64

We could also have accomplished the same thing with a lambda function:

df['geography_mapped_value_lambda'] = df.geography_type.apply(lambda y: 1 if y == "City" else 0)
df.geography_mapped_value_lambda.value_counts()
0    17
1    15
Name: geography_mapped_value_lambda, dtype: int64
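A third option, not shown above, is to skip .apply() entirely and vectorize the comparison, which is generally faster on large frames because it avoids calling a Python function once per row. A sketch on a toy stand-in for the geography_type column:

```python
import pandas as pd

# Toy stand-in for the dataset's geography_type column.
df = pd.DataFrame({"geography_type": ["Town", "City", "CDP", "City"]})

# The == comparison is vectorized and returns booleans; cast to int for 1/0.
df["geography_mapped_value_vec"] = (df["geography_type"] == "City").astype(int)
print(df["geography_mapped_value_vec"].tolist())  # [0, 1, 0, 1]
```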

This example is also available as a downloadable Jupyter Notebook.