Biostat 257 Homework 3

Due May 14 @ 11:59PM

In [1]:
versioninfo()
Julia Version 1.6.0
Commit f9720dc2eb (2021-03-24 12:55 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin19.6.0)
  CPU: Intel(R) Core(TM) i7-6920HQ CPU @ 2.90GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-11.0.1 (ORCJIT, skylake)
Environment:
  JULIA_EDITOR = code
  JULIA_NUM_THREADS = 4

Q1. Big $n$ linear regression

People often think that linear regression on a dataset with millions of observations is a big data problem. Having learnt various methods for solving linear regression, we should now realize that, with the right choice of algorithm, it is a problem any moderate computer can handle.

Q1.1 Download data (10 pts)

Download the flight data from https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/HG7NV7. Do not put data files in Git. You will lose points if you do. For grading purposes (reproducibility), we will assume that the data files are in a subfolder flights.

In [2]:
;ls -l flights
total 3095984
-rw-r--r--@ 1 huazhou  wheel   12652442 Apr 29 13:24 1987.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   49499025 Apr 29 13:24 1988.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   49202298 Apr 29 13:24 1989.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   52041322 Apr 29 13:24 1990.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   49877448 Apr 29 13:25 1991.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   50040946 Apr 29 13:25 1992.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   50111774 Apr 29 13:25 1993.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   51123887 Apr 29 13:25 1994.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   74881752 Apr 29 13:26 1995.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   75887707 Apr 29 13:26 1996.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   76705687 Apr 29 13:26 1997.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   76683506 Apr 29 13:27 1998.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   79449438 Apr 29 13:27 1999.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   82537924 Apr 29 13:27 2000.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   83478700 Apr 29 13:28 2001.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   75907218 Apr 29 13:29 2002.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   95326801 Apr 29 13:20 2003.csv.bz2
-rw-r--r--@ 1 huazhou  wheel  110825331 Apr 29 13:21 2004.csv.bz2
-rw-r--r--@ 1 huazhou  wheel  112450321 Apr 29 13:21 2005.csv.bz2
-rw-r--r--@ 1 huazhou  wheel  115019195 Apr 29 13:22 2006.csv.bz2
-rw-r--r--@ 1 huazhou  wheel  121249243 Apr 29 13:23 2007.csv.bz2
-rw-r--r--@ 1 huazhou  wheel   39277452 Apr 29 13:23 2008.csv.bz2
-rw-r--r--@ 1 huazhou  wheel     244438 Apr 29 13:24 airports.csv
-rw-r--r--@ 1 huazhou  wheel      43758 Apr 29 13:24 carriers.csv
-rw-r--r--@ 1 huazhou  wheel     428796 Apr 29 13:24 plane-data.csv
-rw-r--r--@ 1 huazhou  wheel       1091 Apr 29 13:24 variable-descriptions.csv

Find out how many data points are in each year.
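A minimal sketch for this count (assuming the compressed files sit in the flights subfolder, as required above): stream each file through a decompressor and count its lines, subtracting one for the header.

```julia
using CodecBzip2

# Count the observations (lines minus the header row) in each year's file.
for year in 1987:2008
    nobs = open("flights/$year.csv.bz2", "r") do io
        countlines(Bzip2DecompressorStream(io)) - 1
    end
    println(year, ": ", nobs)
end
```

Streaming through `Bzip2DecompressorStream` avoids ever materializing the decompressed file in memory.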

Q1.2 (10 pts) Problem size

We are interested in how the arrival delay of a flight, ArrDelay, depends on the distance traveled (Distance), departure delay (DepDelay), weekday (DayOfWeek), and airline (UniqueCarrier).

We want to fit a linear regression ArrDelay ~ 1 + Distance + DepDelay + DayOfWeek + UniqueCarrier using data from 1987-2008. Treat DayOfWeek as a factor with 7 levels. We use the dummy coding with 1 (Monday) as the base level. Treat UniqueCarrier as a factor with 8 levels: "AA", "AS", "CO", "DL", "NW", "UA", "US", and "WN". We use the dummy coding with "AA" as the base level.

Will the design matrix $\mathbf{X}$ (in double precision) fit into the memory of your computer?
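A back-of-envelope check (the row count here is a ballpark figure, on the order of 10⁸ flights over 1987-2008, not the exact total; p = 16 matches the column count of the 2003 model matrix in the sample code below):

```julia
# Memory for the dense design matrix X in double precision:
# n rows (ballpark 1.2 × 10⁸ flights) times p = 16 columns
# (intercept + 6 DayOfWeek dummies + Distance + DepDelay + 7 carrier dummies),
# at 8 bytes per Float64 entry.
n, p = 1.2e8, 16
gib = n * p * 8 / 2^30
println(round(gib, digits = 1), " GiB")   # ≈ 14.3 GiB
```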

Q1.3 (30 pts) Choose algorithm

Assume your computer has limited memory, say only 1GB. Review the Summary of Linear Regression and choose one method in the table to solve the linear regression.

Report the estimated regression coefficients $\widehat \beta$, estimated variance $\widehat \sigma^2 = \sum_i (y_i - \widehat y_i)^2 / (n - p)$, and standard errors of $\widehat \beta$.

Hint: It took my laptop about 10-11 minutes to import data and fit linear regression.
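One workable low-memory strategy (a sketch only, not necessarily the intended method): accumulate the (p+1)×(p+1) Gram matrix of [X y] one year at a time, then solve the normal equations with the sweep operator. The helper `year_Xy` is hypothetical; it stands in for per-year loading code like the sample code at the end of Q1.

```julia
using LinearAlgebra, SweepOperator

# Accumulate G = [X y]' [X y] across years, then sweep to obtain
# β̂ = (X'X)⁻¹X'y, the RSS, and -(X'X)⁻¹ in a single pass over the data.
function bigreg(years; p = 16)
    G, nobs = zeros(p + 1, p + 1), 0
    for year in years
        X, y = year_Xy(year)                   # hypothetical per-year loader
        Xy = [X y]
        BLAS.syrk!('U', 'T', 1.0, Xy, 1.0, G)  # G += Xy' * Xy (upper triangle)
        nobs += length(y)
    end
    sweep!(G, 1:p)                             # sweep in the p predictors
    β̂  = G[1:p, end]                           # regression coefficients
    σ̂² = G[end, end] / (nobs - p)              # RSS / (n - p)
    se = sqrt.(σ̂² .* diag(-G[1:p, 1:p]))       # diagonal of -(X'X)⁻¹, scaled
    return β̂, σ̂², se
end
```

Only one year's data is in memory at a time; the Gram matrix itself is a mere 17 × 17.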

Q1.4 Be proud of yourself

Go to your resume/cv and claim you have experience performing analytics on data with 100 million observations.

Sample code

The following code explores the 2003 data and generates the design matrix and response vector for that year. Feel free to use this code in your solution.

In [3]:
using CodecBzip2, CSV, DataFrames, Distributions, LinearAlgebra, 
Serialization, StatsModels, SweepOperator
ENV["COLUMNS"] = 200
Out[3]:
200
In [4]:
# Print first 10 lines of 2003 data.
run(pipeline(`bunzip2 -c flights/2003.csv.bz2`, `head`))
Year,Month,DayofMonth,DayOfWeek,DepTime,CRSDepTime,ArrTime,CRSArrTime,UniqueCarrier,FlightNum,TailNum,ActualElapsedTime,CRSElapsedTime,AirTime,ArrDelay,DepDelay,Origin,Dest,Distance,TaxiIn,TaxiOut,Cancelled,CancellationCode,Diverted,CarrierDelay,WeatherDelay,NASDelay,SecurityDelay,LateAircraftDelay
2003,1,29,3,1651,1655,1912,1913,UA,1017,N202UA,141,138,119,-1,-4,ORD,MSY,837,5,17,0,NA,0,NA,NA,NA,NA,NA
2003,1,30,4,1654,1655,1910,1913,UA,1017,N311UA,136,138,108,-3,-1,ORD,MSY,837,2,26,0,NA,0,NA,NA,NA,NA,NA
2003,1,31,5,1724,1655,1936,1913,UA,1017,N317UA,132,138,110,23,29,ORD,MSY,837,5,17,0,NA,0,NA,NA,NA,NA,NA
2003,1,1,3,1033,1035,1625,1634,UA,1018,N409UA,232,239,215,-9,-2,OAK,ORD,1835,6,11,0,NA,0,NA,NA,NA,NA,NA
2003,1,2,4,1053,1035,1726,1634,UA,1018,N496UA,273,239,214,52,18,OAK,ORD,1835,13,46,0,NA,0,NA,NA,NA,NA,NA
2003,1,3,5,1031,1035,1640,1634,UA,1018,N412UA,249,239,223,6,-4,OAK,ORD,1835,13,13,0,NA,0,NA,NA,NA,NA,NA
2003,1,4,6,1031,1035,1626,1634,UA,1018,N455UA,235,239,219,-8,-4,OAK,ORD,1835,5,11,0,NA,0,NA,NA,NA,NA,NA
2003,1,5,7,1035,1035,1636,1634,UA,1018,N828UA,241,239,227,2,0,OAK,ORD,1835,5,9,0,NA,0,NA,NA,NA,NA,NA
2003,1,6,1,1031,1035,1653,1634,UA,1018,N453UA,262,239,241,19,-4,OAK,ORD,1835,7,14,0,NA,0,NA,NA,NA,NA,NA
Out[4]:
Base.ProcessChain(Base.Process[Process(`bunzip2 -c flights/2003.csv.bz2`, ProcessSignaled(13)), Process(`head`, ProcessExited(0))], Base.DevNull(), Base.DevNull(), Base.DevNull())
In [5]:
# how many data points in 2003?
open("flights/2003.csv.bz2", "r") do io
    countlines(Bzip2DecompressorStream(io))
end
Out[5]:
6488541
In [6]:
# # figure out which airlines appear in each year of 1987-2008
# airlines = Vector{Vector{String}}(undef, 22)
# @time for year in 1987:2008
#     println("year $year")
#     filename = "flights/" * string(year) * ".csv.bz2"
#     df = open(filename, "r") do io
#         CSV.File(
#             Bzip2DecompressorStream(io),
#             select = ["UniqueCarrier"],
#             types = Dict("UniqueCarrier" => String),
#             missingstring = "NA"
#         ) |> DataFrame
#     end
#     airlines[year - 1986] = unique(df[!, :UniqueCarrier])
# end
# intersect(airlines...) |> sort
In [7]:
# load 2003 data into DataFrame
@time df = open("flights/2003.csv.bz2", "r") do io
    CSV.File(
        Bzip2DecompressorStream(io), 
        select = ["DayOfWeek", "UniqueCarrier", "ArrDelay", 
            "DepDelay", "Distance"],
        types = Dict(
            "DayOfWeek" => UInt8,
            "UniqueCarrier" => String, 
            "ArrDelay" => Float64, 
            "DepDelay" => Float64, 
            "Distance" => UInt16
            ),
        missingstring = "NA"
        ) |> DataFrame
end
 32.675195 seconds (39.79 M allocations: 4.854 GiB, 2.26% gc time, 24.62% compilation time)
Out[7]:

6,488,540 rows × 5 columns

 Row  DayOfWeek  UniqueCarrier  ArrDelay  DepDelay  Distance
      UInt8      String         Float64?  Float64?  UInt16
   1  3          UA                 -1.0      -4.0       837
   2  4          UA                 -3.0      -1.0       837
   3  5          UA                 23.0      29.0       837
   4  3          UA                 -9.0      -2.0      1835
   5  4          UA                 52.0      18.0      1835
   6  5          UA                  6.0      -4.0      1835
   7  6          UA                 -8.0      -4.0      1835
   8  7          UA                  2.0       0.0      1835
   9  1          UA                 19.0      -4.0      1835
  10  3          UA                  4.0       3.0       413
  11  4          UA                -23.0      -4.0       413
  12  5          UA                -19.0      -3.0       413
  13  6          UA                -12.0       0.0       413
  14  7          UA                 64.0      82.0       413
  15  1          UA                 -4.0       0.0       413
  16  2          UA                 -8.0       2.0       413
  17  3          UA                -21.0      -4.0       413
  18  4          UA                -27.0      -4.0       413
  19  5          UA                -16.0      -3.0       413
  20  6          UA                -16.0      -2.0       413
  21  7          UA                -24.0      -6.0       413
  22  1          UA                -12.0       4.0       413
  23  2          UA                -11.0      -3.0       413
  24  3          UA                 -9.0      -1.0       413
  25  4          UA                -10.0       1.0       413
  26  5          UA                -10.0      -5.0       413
  27  6          UA                -23.0      -4.0       413
  28  7          UA                -13.0      -8.0       413
  29  1          UA                -25.0      -1.0       413
  30  2          UA                -20.0      -5.0       413
In [8]:
# how many rows?
size(df, 1)
Out[8]:
6488540
In [9]:
# drop rows with missing values
dropmissing!(df)
size(df, 1)
Out[9]:
6375689
In [10]:
# filter out rows not in the airline list
airlines = ["AA", "AS", "CO", "DL", "NW", "UA", "US", "WN"]
filter!(row -> row[:UniqueCarrier] ∈ airlines, df)
size(df, 1)
Out[10]:
4230285
In [11]:
# model frame for year 2003
mf = ModelFrame(
    @formula(ArrDelay ~ 1 + DayOfWeek + Distance + DepDelay + UniqueCarrier), 
    df,
    contrasts = Dict(
        :DayOfWeek => StatsModels.DummyCoding(base = 1, levels = UInt8.(1:7)),
        :UniqueCarrier => StatsModels.DummyCoding(
            base = "AA", 
            levels = ["AA", "AS", "CO", "DL", "NW", "UA", "US", "WN"]
        )
    )
)
Out[11]:
ModelFrame{NamedTuple{(:ArrDelay, :DayOfWeek, :Distance, :DepDelay, :UniqueCarrier), Tuple{Vector{Float64}, Vector{UInt8}, Vector{UInt16}, Vector{Float64}, Vector{String}}}, StatisticalModel}(ArrDelay ~ 1 + DayOfWeek + Distance + DepDelay + UniqueCarrier, StatsModels.Schema with 5 entries:
  DayOfWeek => DayOfWeek
  Distance => Distance
  UniqueCarrier => UniqueCarrier
  ArrDelay => ArrDelay
  DepDelay => DepDelay
, (ArrDelay = [-1.0, -3.0, 23.0, -9.0, 52.0, 6.0, -8.0, 2.0, 19.0, 4.0  …  62.0, 66.0, 27.0, 134.0, 53.0, 47.0, 54.0, -5.0, 3.0, -1.0], DayOfWeek = UInt8[0x03, 0x04, 0x05, 0x03, 0x04, 0x05, 0x06, 0x07, 0x01, 0x03  …  0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05], Distance = UInt16[0x0345, 0x0345, 0x0345, 0x072b, 0x072b, 0x072b, 0x072b, 0x072b, 0x072b, 0x019d  …  0x0245, 0x0763, 0x068e, 0x03b2, 0x032d, 0x01b0, 0x01b0, 0x01c5, 0x02b1, 0x0789], DepDelay = [-4.0, -1.0, 29.0, -2.0, 18.0, -4.0, -4.0, 0.0, -4.0, 3.0  …  29.0, 39.0, 26.0, 114.0, 44.0, 16.0, 50.0, -3.0, 3.0, -1.0], UniqueCarrier = ["UA", "UA", "UA", "UA", "UA", "UA", "UA", "UA", "UA", "UA"  …  "DL", "DL", "DL", "DL", "DL", "DL", "DL", "DL", "DL", "DL"]), StatisticalModel)
In [12]:
# generate the covariate matrix X for year 2003
X = modelmatrix(mf)
Out[12]:
4230285×16 Matrix{Float64}:
 1.0  0.0  1.0  0.0  0.0  0.0  0.0   837.0   -4.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 1.0  0.0  0.0  1.0  0.0  0.0  0.0   837.0   -1.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0   837.0   29.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 1.0  0.0  1.0  0.0  0.0  0.0  0.0  1835.0   -2.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 1.0  0.0  0.0  1.0  0.0  0.0  0.0  1835.0   18.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0  1835.0   -4.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 1.0  0.0  0.0  0.0  0.0  1.0  0.0  1835.0   -4.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 1.0  0.0  0.0  0.0  0.0  0.0  1.0  1835.0    0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 1.0  0.0  0.0  0.0  0.0  0.0  0.0  1835.0   -4.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 1.0  0.0  1.0  0.0  0.0  0.0  0.0   413.0    3.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 1.0  0.0  0.0  1.0  0.0  0.0  0.0   413.0   -4.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0   413.0   -3.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 1.0  0.0  0.0  0.0  0.0  1.0  0.0   413.0    0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 ⋮                        ⋮                             ⋮                        ⋮
 1.0  0.0  0.0  0.0  1.0  0.0  0.0   406.0  104.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0  1891.0   70.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0   581.0   29.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0  1891.0   39.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0  1678.0   26.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0   946.0  114.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0   813.0   44.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0   432.0   16.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0   432.0   50.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0   453.0   -3.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0   689.0    3.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
 1.0  0.0  0.0  0.0  1.0  0.0  0.0  1929.0   -1.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
In [13]:
# generate the response vector Y for year 2003
y = df[!, :ArrDelay]
Out[13]:
4230285-element Vector{Float64}:
  -1.0
  -3.0
  23.0
  -9.0
  52.0
   6.0
  -8.0
   2.0
  19.0
   4.0
 -23.0
 -19.0
 -12.0
   ⋮
 108.0
  66.0
  62.0
  66.0
  27.0
 134.0
  53.0
  47.0
  54.0
  -5.0
   3.0
  -1.0

Q2. PageRank

We are going to try different numerical methods learnt in class on the Google PageRank problem.

Q2.1 (5 pts) Recognize structure

Let $\mathbf{A} \in \{0,1\}^{n \times n}$ be the connectivity matrix of $n$ web pages with entries $$ \begin{eqnarray*} a_{ij}= \begin{cases} 1 & \text{if page $i$ links to page $j$} \\ 0 & \text{otherwise} \end{cases}. \end{eqnarray*} $$ $r_i = \sum_j a_{ij}$ is the out-degree of page $i$. That is, $r_i$ is the number of links on page $i$. Imagine a random surfer exploring the space of $n$ pages according to the following rules.

  • From a page $i$ with $r_i>0$
    • with probability $p$, (s)he randomly chooses a link on page $i$ (uniformly) and follows that link to the next page
    • with probability $1-p$, (s)he randomly chooses one page from the set of all $n$ pages (uniformly) and proceeds to that page
  • From a page $i$ with $r_i=0$ (a dangling page), (s)he randomly chooses one page from the set of all $n$ pages (uniformly) and proceeds to that page

The process defines a Markov chain on the space of $n$ pages. Write the transition matrix $\mathbf{P}$ of the Markov chain as a sparse matrix plus rank 1 matrix.
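As a sanity check of the surfer rules (a toy, dense construction, not the requested sparse + rank-1 decomposition), here is $\mathbf{P}$ on a 4-page graph with one dangling page:

```julia
p = 0.85                            # teleportation parameter
A = [0 1 1 0;                       # page 1 links to pages 2 and 3
     1 0 0 0;                       # page 2 links to page 1
     0 0 0 0;                       # page 3 is dangling
     1 1 0 0]                       # page 4 links to pages 1 and 2
n = size(A, 1)
r = vec(sum(A, dims = 2))           # out-degrees
P = [r[i] > 0 ? p * A[i, j] / r[i] + (1 - p) / n : 1 / n
     for i in 1:n, j in 1:n]
@assert all(sum(P, dims = 2) .≈ 1)  # every row is a probability distribution
```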

Q2.2 Relate to numerical linear algebra

According to standard Markov chain theory, the (random) position of the surfer converges to the stationary distribution $\mathbf{x} = (x_1,\ldots,x_n)^T$ of the Markov chain. $x_i$ has the natural interpretation of the proportion of times the surfer visits page $i$ in the long run. Therefore $\mathbf{x}$ serves as page ranks: a higher $x_i$ means page $i$ is more visited. It is well known that $\mathbf{x}$ is the left eigenvector corresponding to the top eigenvalue 1 of the transition matrix $\mathbf{P}$. That is, $\mathbf{P}^T \mathbf{x} = \mathbf{x}$. Therefore $\mathbf{x}$ can be solved as an eigen-problem. It can also be cast as solving a linear system. Since the row sums of $\mathbf{P}$ are 1, the matrix $\mathbf{I} - \mathbf{P}^T$ is rank deficient. We can replace the first equation by the constraint $\sum_{i=1}^n x_i = 1$.

Hint: For iterative solvers, we don't need to replace the 1st equation. We can use the matrix $\mathbf{I} - \mathbf{P}^T$ directly if we start with a vector with all positive entries.
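A toy illustration of the linear-system route (with a hand-made 3-state transition matrix, not the PageRank $\mathbf{P}$): replace the first equation of $(\mathbf{I} - \mathbf{P}^T)\mathbf{x} = \mathbf{0}$ by the constraint and solve.

```julia
using LinearAlgebra

P = [0.1  0.6  0.3;                 # a small transition matrix (rows sum to 1)
     0.4  0.4  0.2;
     0.5  0.25 0.25]
n = size(P, 1)
M = Matrix{Float64}(I, n, n) - transpose(P)
M[1, :] .= 1                        # replace first equation by sum(x) = 1
b = [1.0; zeros(n - 1)]
x = M \ b                           # stationary distribution
@assert transpose(P) * x ≈ x
@assert sum(x) ≈ 1
```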

Q2.3 (10 pts) Explore data

Obtain the connectivity matrix A from the SNAP/web-Google data in the MatrixDepot package.

In [14]:
using MatrixDepot

md = mdopen("SNAP/web-Google")
# display documentation for the SNAP/web-Google data
mdinfo(md)
include group.jl for user defined matrix generators
verify download of index files...
reading database
adding metadata...
adding svd data...
writing database
used remote sites are sparse.tamu.edu with MAT index and math.nist.gov with HTML index
Out[14]:

SNAP/web-Google

MatrixMarket matrix coordinate pattern general


  • notes:

    Networks from SNAP (Stanford Network Analysis Platform) Network Data Sets,
    Jure Leskovec http://snap.stanford.edu/data/index.html
    email jure at cs.stanford.edu

    Google web graph

    Dataset information:
    Nodes represent web pages and directed edges represent hyperlinks between them.
    The data was released in 2002 by Google as a part of the Google Programming Contest.

    Dataset statistics:
    Nodes                              875713
    Edges                              5105039
    Nodes in largest WCC               855802 (0.977)
    Edges in largest WCC               5066842 (0.993)
    Nodes in largest SCC               434818 (0.497)
    Edges in largest SCC               3419124 (0.670)
    Average clustering coefficient     0.6047
    Number of triangles                13391903
    Fraction of closed triangles       0.05523
    Diameter (longest shortest path)   22
    90-percentile effective diameter   8.1

    Source (citation):
    J. Leskovec, K. Lang, A. Dasgupta, M. Mahoney. Community Structure in Large
    Networks: Natural Cluster Sizes and the Absence of Large Well-Defined Clusters.
    arXiv.org:0810.1355, 2008.

    Google programming contest, 2002
    http://www.google.com/programming-contest/

    Files:
    File                Description
    web-Google.txt.gz   Webgraph from the Google programming contest, 2002

916428 916428 5105039

In [15]:
# connectivity matrix
A = md.A
Out[15]:
916428×916428 SparseArrays.SparseMatrixCSC{Bool, Int64} with 5105039 stored entries:
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿

Compute summary statistics:

  • How much memory does A take? If converted to a Matrix{Float64} (don't do it!), how much memory will it take?
  • number of web pages
  • number of edges (web links).
  • number of dangling nodes (pages with no out links)
  • histogram of in-degrees
  • list the top 20 pages with the largest in-degrees
  • histogram of out-degrees
  • list the top 20 pages with the largest out-degrees
  • visualize the sparsity pattern of $\mathbf{A}$ or a submatrix of $\mathbf{A}$ say A[1:10000, 1:10000].

Hint: For plots, you can use the UnicodePlots.jl package (spy, histogram, etc), which is fast for large data.
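A sketch for the items above, assuming A is the sparse Boolean connectivity matrix loaded earlier (row and column sums of a SparseMatrixCSC are cheap):

```julia
using SparseArrays, UnicodePlots

Base.summarysize(A)                        # memory taken by the sparse A
8 * prod(size(A)) / 2^40                   # TiB if densified to Float64 (don't!)
size(A, 1)                                 # number of web pages
nnz(A)                                     # number of edges (links)
outdeg = vec(sum(A, dims = 2))
indeg  = vec(sum(A, dims = 1))
count(iszero, outdeg)                      # number of dangling nodes
histogram(indeg)                           # in-degree histogram
partialsortperm(indeg, 1:20, rev = true)   # top 20 pages by in-degree
partialsortperm(outdeg, 1:20, rev = true)  # top 20 pages by out-degree
spy(A[1:10_000, 1:10_000])                 # sparsity pattern of a submatrix
```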

Q2.4 (5 pts) Dense linear algebra?

Consider the following methods to obtain the page ranks of the SNAP/web-Google data.

  1. A dense linear system solver such as LU decomposition.
  2. A dense eigen-solver for asymmetric matrix.

For the LU approach, estimate (1) the memory usage and (2) how long it will take assuming that the LAPACK functions can achieve the theoretical throughput of your computer.
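A back-of-envelope sketch for the estimate (the 50 GFLOPS throughput is a placeholder; substitute your own machine's peak double-precision rate):

```julia
n = 916_428                           # number of web pages
mem_tib = n^2 * 8 / 2^40              # dense Float64 storage: ≈ 6.1 TiB
flops   = 2 * n^3 / 3                 # LU decomposition flop count
hours   = flops / 50e9 / 3600         # at an assumed 50 GFLOPS
println(round(mem_tib, digits = 1), " TiB; ", round(hours, digits = 1), " hours")
```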

Q2.5 Iterative solvers

Set the teleportation parameter at $p = 0.85$. Consider the following methods for solving the PageRank problem.

  1. An iterative linear system solver such as GMRES.
  2. An iterative eigen-solver such as Arnoldi method.

For iterative methods, we have many choices in Julia. See a list of existing Julia packages for linear solvers at this page. The start-up code below uses the KrylovKit.jl package. You can use other packages if you prefer. Make sure to utilize the special structure of $\mathbf{P}$ (sparse + rank 1) to speed up the matrix-vector multiplication.

Step 1 (15 pts)

Let's implement a type PageRankImPt that mimics the matrix $\mathbf{M} = \mathbf{I} - \mathbf{P}^T$. For iterative methods, all we need to provide are methods for evaluating $\mathbf{M} \mathbf{v}$ and $\mathbf{M}^T \mathbf{v}$ for arbitrary vector $\mathbf{v}$.

In [16]:
using BenchmarkTools, LinearAlgebra, SparseArrays, Revise

# a type for the matrix M = I - P^T in PageRank problem
struct PageRankImPt{TA <: Number, IA <: Integer, T <: AbstractFloat} <: AbstractMatrix{T}
    A         :: SparseMatrixCSC{TA, IA} # adjacency matrix
    telep     :: T
    # working arrays
    # TODO: whatever intermediate arrays you may want to pre-allocate
end

# constructor
function PageRankImPt(A::SparseMatrixCSC, telep::T) where T <: AbstractFloat
    n = size(A, 1)
    # TODO: initialize and pre-allocate arrays
    PageRankImPt(A, telep)
end

LinearAlgebra.issymmetric(::PageRankImPt) = false
Base.size(M::PageRankImPt) = size(M.A)
# TODO: implement this function for evaluating M[i, j]
Base.getindex(M::PageRankImPt, i, j) = M.telep

# overwrite `out` by `(I - Pt) * v`
function LinearAlgebra.mul!(
        out :: Vector{T}, 
        M   :: PageRankImPt{<:Number, <:Integer, T}, 
        v   :: Vector{T}) where T <: AbstractFloat
    # TODO: implement mul!(out, M, v)
    sleep(1e-2) # wait 10 ms as if your code takes 10 ms
    return out
end

# overwrite `out` by `(I - P) * v`
function LinearAlgebra.mul!(
        out :: Vector{T}, 
        Mt  :: Transpose{T, PageRankImPt{TA, IA, T}}, 
        v   :: Vector{T}) where {TA<:Number, IA<:Integer, T <: AbstractFloat}
    M = Mt.parent
    # TODO: implement mul!(out, transpose(M), v)
    sleep(1e-2) # wait 10 ms as if your code takes 10 ms
    out
end

To check correctness, note that $$ \mathbf{M}^T \mathbf{1} = \mathbf{0} $$ and $$ \mathbf{M} \mathbf{x} = \mathbf{0} $$ for the stationary distribution $\mathbf{x}$.

Download the solution file pgrksol.csv.gz. Do not put this file in your Git. You will lose points if you do. You can add a line pgrksol.csv.gz to your .gitignore file.

In [17]:
using CodecZlib, DelimitedFiles

isfile("pgrksol.csv.gz") || download("https://raw.githubusercontent.com/ucla-biostat-257-2021spring/ucla-biostat-257-2021spring.github.io/master/hw/hw3/pgrksol.csv.gz")
xsol = open("pgrksol.csv.gz", "r") do io
    vec(readdlm(GzipDecompressorStream(io)))
end
Out[17]:
916428-element Vector{Float64}:
 3.3783428216975054e-5
 2.0710155392568165e-6
 3.663065984832893e-6
 7.527510785028837e-7
 8.63328599674051e-7
 1.769418252415541e-6
 2.431230382883396e-7
 6.368417180141445e-7
 4.744973703681939e-7
 2.6895486110647536e-7
 3.18574314847409e-6
 7.375106374416742e-7
 2.431230382883396e-7
 ⋮
 1.1305006040148547e-6
 4.874825281822915e-6
 3.167946973112519e-6
 9.72688040308568e-7
 6.588614479285245e-7
 7.737011774300648e-7
 2.431230382883396e-7
 1.6219204214797293e-6
 3.912130060551738e-7
 2.431230382883396e-7
 7.296033831163157e-6
 6.330939996912478e-7

You will lose all 35 points (Steps 1 and 2) if the following statements throw an AssertionError.

In [18]:
M = PageRankImPt(A, 0.85)
n = size(M, 1)

@assert transpose(M) * ones(n) ≈ zeros(n)
In [19]:
@assert M * xsol ≈ zeros(n)

Step 2 (20 pts)

We want to benchmark the hot functions mul! to make sure they are efficient and allocate little memory.

In [20]:
M = PageRankImPt(A, 0.85)
n = size(M, 1)
v, out = ones(n), zeros(n)
bm_mv = @benchmark mul!($out, $M, $v) setup=(fill!(out, 0); fill!(v, 1))
Out[20]:
BenchmarkTools.Trial: 
  memory estimate:  144 bytes
  allocs estimate:  5
  --------------
  minimum time:     10.274 ms (0.00% GC)
  median time:      12.783 ms (0.00% GC)
  mean time:        12.790 ms (0.00% GC)
  maximum time:     14.319 ms (0.00% GC)
  --------------
  samples:          355
  evals/sample:     1
In [21]:
bm_mtv = @benchmark mul!($out, $(transpose(M)), $v) setup=(fill!(out, 0); fill!(v, 1))
Out[21]:
BenchmarkTools.Trial: 
  memory estimate:  144 bytes
  allocs estimate:  5
  --------------
  minimum time:     10.146 ms (0.00% GC)
  median time:      12.720 ms (0.00% GC)
  mean time:        12.644 ms (0.00% GC)
  maximum time:     14.118 ms (0.00% GC)
  --------------
  samples:          359
  evals/sample:     1

You will lose 1 point for each 100 bytes of memory allocation. So the points you will get are

In [22]:
clamp(10 - median(bm_mv).memory / 100, 0, 10) + 
clamp(10 - median(bm_mtv).memory / 100, 0, 10)
Out[22]:
17.12

Hint: My median run times are 30-40 ms and memory allocations are 0 bytes.

Step 3 (20 pts)

Let's first try to solve the PageRank problem by the GMRES method for solving linear equations.

In [23]:
using KrylovKit

# normalize in-degrees to be the start point
x0   = vec(sum(A, dims = 1)) .+ 1.0
x0 ./= sum(x0)

# right hand side
b = zeros(n)

# warm up (compilation)
linsolve(M, b, x0, issymmetric = false, isposdef = false, maxiter = 1) 
# solve the linear system and time it
(x_gmres, info), time_gmres, = @timed linsolve(M, b, x0, issymmetric = false, isposdef = false)
Out[23]:
(value = ([3.5373439728225696e-5, 1.1625074089088258e-6, 7.4732619144138796e-6, 6.642899479479004e-7, 9.964349219218506e-7, 2.6571597917916015e-6, 1.660724869869751e-7, 6.642899479479004e-7, 3.321449739739502e-7, 3.321449739739502e-7  …  2.989304765765552e-6, 1.3285798958958008e-6, 2.4910873048046265e-6, 4.982174609609253e-7, 1.660724869869751e-7, 2.4910873048046265e-6, 3.321449739739502e-7, 1.660724869869751e-7, 7.4732619144138796e-6, 4.982174609609253e-7], ConvergenceInfo: one converged value after 0 iterations and 1 applications of the linear map;
norms of residuals are given by (0.0,).
), time = 0.068877865, bytes = 27636729, gctime = 0.017346028, gcstats = Base.GC_Diff(27636729, 3, 0, 79377, 3, 0, 17346028, 1, 0))

Check correctness. You will lose all 20 points if the following statement throws an AssertionError.

In [24]:
@assert norm(x_gmres - xsol) < 1e-8
AssertionError: norm(x_gmres - xsol) < 1.0e-8

Stacktrace:
 [1] top-level scope
   @ In[24]:1
 [2] eval
   @ ./boot.jl:360 [inlined]
 [3] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
   @ Base ./loading.jl:1094

GMRES should be reasonably fast. The points you'll get are

In [25]:
clamp(20 / time_gmres * 20, 0, 20)
Out[25]:
20.0

Hint: My runtime is about 7-8 seconds.

Step 4 (20 pts)

Let's next try to solve the PageRank problem by the Arnoldi method for solving eigen-problems.

In [26]:
# warm up (compilation)
eigsolve(M, x0, 1, :SR, issymmetric = false, maxiter = 1)
# output is complex eigenvalue/eigenvector
(vals, vecs, info), time_arnoldi, = @timed eigsolve(M, x0, 1, :SR, issymmetric = false)
Out[26]:
(value = ([0.0], [[0.0057161240427034184, 0.0001878538417789856, 0.0012076318400077645, 0.00010734505244513462, 0.00016101757866770192, 0.00042938020978053847, 2.6836263111283654e-5, 0.00010734505244513462, 5.367252622256731e-5, 5.367252622256731e-5  …  0.00048305273600310583, 0.00021469010489026923, 0.00040254394666925487, 8.050878933385096e-5, 2.6836263111283654e-5, 0.00040254394666925487, 5.367252622256731e-5, 2.6836263111283654e-5, 0.0012076318400077645, 8.050878933385096e-5]], ConvergenceInfo: one converged value after 1 iterations and 1 applications of the linear map;
norms of residuals are given by (0.0,).
), time = 0.089409929, bytes = 34596553, gctime = 0.0152803, gcstats = Base.GC_Diff(34596553, 4, 0, 79217, 16, 0, 15280300, 1, 0))

Check correctness. You will lose all 20 points if the following statement throws an AssertionError.

In [27]:
@assert abs(Real(vals[1])) < 1e-8
In [28]:
x_arnoldi   = abs.(Real.(vecs[1]))
x_arnoldi ./= sum(x_arnoldi)
@assert norm(x_arnoldi - xsol) < 1e-8
AssertionError: norm(x_arnoldi - xsol) < 1.0e-8

Stacktrace:
 [1] top-level scope
   @ In[28]:3
 [2] eval
   @ ./boot.jl:360 [inlined]
 [3] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
   @ Base ./loading.jl:1094

Arnoldi should be reasonably fast. The points you'll get are

In [29]:
clamp(20 / time_arnoldi * 20, 0, 20)
Out[29]:
20.0

Hint: My runtime is about 11-12 seconds.

Q2.6 (5 pts) Results

List the top 20 pages you found and their corresponding PageRank score. Do they match the top 20 pages ranked according to in-degrees?
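A sketch for this comparison, assuming x holds your converged PageRank vector (e.g. x_gmres or x_arnoldi) and indeg the in-degree vector from Q2.3:

```julia
top_pr = partialsortperm(x, 1:20, rev = true)      # top 20 page indices by PageRank
top_in = partialsortperm(indeg, 1:20, rev = true)  # top 20 by in-degree
[top_pr x[top_pr]]                                 # pages with their PageRank scores
length(intersect(top_pr, top_in))                  # how many pages the two lists share
```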

Q2.7 Be proud of yourself

Go to your resume/cv and claim you have experience performing analysis on a network of one million nodes.