Title: | Dynamic Graphical Models |
---|---|
Description: | Dynamic graphical models for multivariate time series data to estimate directed dynamic networks, e.g. in functional magnetic resonance imaging (fMRI); see Schwab et al. (2018) <doi:10.1016/j.neuroimage.2018.03.074>. |
Authors: | Simon Schwab <[email protected]>, Ruth Harbord <[email protected]>, Lilia Costa <[email protected]>, Thomas Nichols <[email protected]> |
Maintainer: | Simon Schwab <[email protected]> |
License: | GPL-3 |
Version: | 1.7.4 |
Built: | 2024-10-08 04:47:38 UTC |
Source: | https://github.com/schw4b/dgm |
Performs a binomial test with FDR correction for network edge occurrence.
binom.nettest(adj, alter = "two.sided", fdr = 0.05)
adj |
adjacency matrix, nodes x nodes x subj, or nodes x nodes x runs x subj. |
alter |
type of binomial test, "two.sided" (default), "less", or "greater" |
fdr |
false discovery rate (FDR) control, default is 0.05. |
store list with results.
# Generate some sample binary 5-node network structures for N=20,
# then perform significance testing.
N=20
x = rmdiag(array(rbinom(n=5*5*N, size=1, prob=0.10), dim=c(5,5,N)))
x[1,2,2:N]=1; x[2,3,seq(1,N,2)]=1 # add some consistent edges
A = apply(x, c(1,2), mean)
l = binom.nettest(x)
Mean-centers time series in a 2D array (time series x nodes), i.e. each node's time series has a mean of zero.
center(X)
X |
2D array with dimensions timeseries x nodes. |
M 2D array.
data("utestdata") myts=center(myts)
data("utestdata") myts=center(myts)
Threshold correlation matrix to match a given number of edges.
cor2adj(R, n)
R |
correlation matrix. |
n |
number of edges. |
A thresholded matrix.
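A minimal usage sketch (not part of the original examples): threshold a sample correlation matrix to keep a fixed number of edges.
# compute a correlation matrix from random data and keep the 6 strongest edges
ts = array(rnorm(200*5), dim=c(200,5))
R = cor(ts)
A = cor2adj(R, 6)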
Mean correlation of time series across subjects.
corTs(ts)
ts |
a 3D array, time series x nodes x subjects. |
M correlation matrix.
# create some sample data with 200 samples,
# 5 nodes, and 2 subjects
ts = array(rnorm(200*5*2), dim=c(200,5,2))
M = corTs(ts)
A group is a list containing restructured data from subjects for easier group analysis.
dgm.group(subj)
subj |
a list of subjects. |
group a list.
# create some sample data with 200 samples,
# 3 nodes, and 2 subjects
ts = array(rnorm(200*3*2), dim=c(200,3,2))
mysubs=list()
mysubs[[1]]=subject(ts[,,1])
mysubs[[2]]=subject(ts[,,2])
g=dgm.group(mysubs)
Quick diagnostics on delta.
diag.delta(path, id, nodes)
path |
path to results files. |
id |
subject identifier. |
nodes |
number of nodes. |
x, array with the node models' delta values.
Calculate the log predictive likelihood for a specified set of parents and a fixed delta.
dlm.lpl(Yt, Ft, delta, priors = priors.spec())
Yt |
the vector of observed time series, length T (number of time points). |
Ft |
the matrix of covariates, dim = number of thetas (p) x number of time points (T). |
delta |
discount factor (scalar). |
priors |
list with prior hyperparameters. |
mt |
the vector or matrix of the posterior mean (location parameter), dim = p x T. |
Ct |
and CSt, the posterior scale matrix, dim = p x p x T. |
Rt |
and RSt, the prior scale matrix, dim = p x p x T. |
nt |
and dt, the vectors of the updated hyperparameters for the precision phi, length T. |
S |
the vector of the point estimate for the observation variance, length T. |
ft |
the vector of the one-step forecast location parameter, length T. |
Qt |
the vector of the one-step forecast scale parameter, length T. |
ets |
the vector of the standardised forecast residuals, length T. |
lpl |
the vector of the Log Predictive Likelihood, length T. |
West, M. & Harrison, J., 1997. Bayesian Forecasting and Dynamic Models. Springer New York.
data("utestdata") Yt = myts[,1] Ft = t(cbind(1,myts[,2:5])) m = dlm.lpl(Yt, Ft, 0.7)
data("utestdata") Yt = myts[,1] Ft = t(cbind(1,myts[,2:5])) m = dlm.lpl(Yt, Ft, 0.7)
Calculate the location and scale parameters for the time-varying coefficients given all the observations.
West, M. & Harrison, J., 1997. Bayesian Forecasting and Dynamic Models. Springer New York.
dlm.retro(mt, CSt, RSt, nt, dt)
mt |
the vector or matrix of the posterior mean (location parameter), dim = p x T. |
CSt |
the posterior scale matrix, dim = p x p x T. |
RSt |
the prior scale matrix, dim = p x p x T. |
nt |
vector of the updated hyperparameters for the precision phi, length T. |
dt |
vector of the updated hyperparameters for the precision phi, length T. |
smt = the location parameter of the retrospective distribution with dimension p x T
sCt = the scale matrix of the retrospective distribution with dimension p x p x T
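A minimal usage sketch, assuming (as the value list of dlm.lpl suggests) that the filtering output contains mt, CSt, RSt, nt, and dt:
data("utestdata")
Yt = myts[,1]
Ft = t(cbind(1, myts[,2:5]))
m = dlm.lpl(Yt, Ft, 0.7)                       # forward filtering
s = dlm.retro(m$mt, m$CSt, m$RSt, m$nt, m$dt)  # retrospective smoothing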
C++ implementation of dlm.lpl.
dlmLplCpp(Yt_, Ft_, delta, m0_, CS0_, n0, d0)
Yt_ |
the vector of observed time series |
Ft_ |
the matrix of covariates |
delta |
discount factor |
m0_ |
the value of the prior mean |
CS0_ |
controls the scaling of the prior variance |
n0 |
prior hyperparameter. |
d0 |
prior hyperparameter. |
A function for an exhaustive search over all possible parent models; it also calculates the optimum value of the discount factor.
exhaustive.search( Data, node, nbf = 15, delta = seq(0.5, 1, 0.01), cpp = TRUE, priors = priors.spec() )
Data |
Dataset with dimension number of time points T x Number of nodes Nn. |
node |
The node to find parents for. |
nbf |
Log Predictive Likelihood will sum from (and including) this time point. |
delta |
a vector of potential values for the discount factor. |
cpp |
boolean true (default): fast C++ implementation, false: native R code. |
priors |
list with prior hyperparameters. |
model.store a matrix with the model, LPL and chosen discount factor for all possible models.
runtime an estimate of the run time of the function, using proc.time().
data("utestdata") result=exhaustive.search(myts,3)
data("utestdata") result=exhaustive.search(myts,3)
Get adjacency and associated likelihoods (LPL) and discount factors (df) of winning models.
getAdjacency(winner, nodes)
winner |
2D matrix. |
nodes |
number of nodes. |
adj, 2D adjacency matrix.
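A minimal usage sketch, assuming the winner matrix returned by subject() can be passed directly:
data("utestdata")
# estimate a small 3-node network, then extract the adjacency of the winning models
sub = subject(myts[,1:3])
adj = getAdjacency(sub$winner, 3)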
Checks results and returns job number for incomplete nodes.
getIncompleteNodes(path, ids, Nr, Nn)
path |
path to results. |
ids |
subject IDs. |
Nr |
Number of runs. |
Nn |
Number of nodes. |
jobs, job numbers of the incomplete nodes.
Extract a specific parent model with associated df and ME from the complete model space.
getModel(models, parents)
models |
a 2D model matrix. |
parents |
a vector with parent nodes. |
mod specific parent model.
data("utestdata") r=exhaustive.search(myts,3) # get model with parents 1, 2, and 4. m=getModel(r$model.store,c(1,2,4))
data("utestdata") r=exhaustive.search(myts,3) # get model with parents 1, 2, and 4. m=getModel(r$model.store,c(1,2,4))
Get model number from a set of parents.
getModelNr(models, parents)
models |
a 2D model matrix. |
parents |
a vector with parent nodes. |
nr model number.
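A minimal usage sketch, analogous to the getModel() example:
data("utestdata")
r = exhaustive.search(myts, 3)
# model number of the model with parents 1, 2, and 4
nr = getModelNr(r$model.store, c(1,2,4))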
Get winner network by maximizing the log predictive likelihood (LPL) over a set of models.
getWinner(models, nodes)
models |
2D matrix, or 3D models x node. |
nodes |
number of nodes. |
winner array with highest scored model(s).
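A minimal usage sketch for a single node's model space (the utestdata time series has 5 nodes):
data("utestdata")
r = exhaustive.search(myts, 3)
w = getWinner(r$model.store, 5)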
Plots network as adjacency matrix.
gplotMat( adj, title = NULL, colMapLabel = NULL, hasColMap = NULL, lim = c(0, 1), gradient = c("white", "orange", "red"), nodeLabels = waiver(), axisTextSize = 12, xAngle = 0, titleTextSize = 12, barWidth = 1, textSize = 12 )
adj |
2D adjacency matrix. |
title |
title. |
colMapLabel |
label for colormap. |
hasColMap |
FALSE turns off color map, default is NULL (on). |
lim |
vector with min and max value, data outside this range will be removed. |
gradient |
gradient colors. |
nodeLabels |
node labels. |
axisTextSize |
text size of the y and x tick labels. |
xAngle |
orientation of the x tick labels. |
titleTextSize |
text size of the title. |
barWidth |
width of the colorbar. |
textSize |
text size. |
# Generate some sample binary 5-node network structures for N=20,
# then compute proportion at each edge
N=20
x = array(rbinom(n=5*5*N, size=1, prob=0.30), dim=c(5,5,N))
A = apply(x, c(1,2), mean)
gplotMat(A, title = "network", colMapLabel = '%', barWidth = 0.3)
Merges forward and backward model store.
mergeModels(fw, bw)
fw |
forward model. |
bw |
backward model. |
m model store.
A function to generate all the possible models.
model.generator(Nn, node)
Nn |
number of nodes; the number of columns of the dataset can be used. |
node |
The node to find parents for. |
output.model = a matrix with dimensions (Nn-1) x number of models, where number of models = 2^(Nn-1).
m=model.generator(5,1)
Simulation 22, a 5-node network from Smith et al. (2011); only the first subject.
Runs exhaustive search on a single node and saves results in txt file.
node( X, n, id = NULL, nbf = 15, delta = seq(0.5, 1, 0.01), cpp = TRUE, priors = priors.spec(), path = getwd(), method = "exhaustive" )
X |
array with dimensions timeseries x nodes. |
n |
node number. |
id |
subject ID. If set, results are saved to a txt file. |
nbf |
Log Predictive Likelihood will sum from (and including) this time point. |
delta |
a vector of potential values for the discount factor. |
cpp |
boolean true (default): fast C++ implementation, false: native R code. |
priors |
list with prior hyperparameters. |
path |
a path where results are written. |
method |
can be exhaustive (default), forward, backward, or both. |
store list with results.
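A minimal usage sketch, assuming that with the default id = NULL the results are only returned and not written to a file:
data("utestdata")
# parent search for node 2 of a small 3-node data set
res = node(myts[,1:3], 2)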
Patel's method: computes connection strength (kappa) and direction (tau) between pairs of nodes.
patel(X, lower = 0.1, upper = 0.9, bin = 0.75, TK = 0, TT = 0)
X |
time x node 2D matrix. |
lower |
percentile cutoff. |
upper |
percentile cutoff for 0-1 scaling. |
bin |
threshold for conversion to binary values. |
TK |
significance threshold for connection strength kappa. |
TT |
significance threshold for direction tau. |
PT list with strengths kappa, direction tau, and net structure.
# Generate some sample data
x=array(rnorm(200*5), dim=c(200,5))
p=patel(x)
A group is a list containing restructured data from subjects for easier group analysis.
patel.group(subj)
subj |
a list of subjects. |
group a list.
# create some sample data with 200 samples,
# 3 nodes, and 2 subjects
ts = array(rnorm(200*3*2), dim=c(200,3,2))
mysubs=list()
mysubs[[1]]=patel(ts[,,1])
mysubs[[2]]=patel(ts[,,2])
g=patel.group(mysubs)
Performance of estimates, such as sensitivity, specificity, and more.
perf(x, true)
x |
estimated binary network matrix. |
true |
true binary network matrix. |
p list with results.
trueNet=matrix(c(0,0,0,1,0,0,0,1,0),3,3)
am=matrix(c(0,0,0,1,0,1,0,1,0),3,3)
p=perf(am, trueNet)
Specify the priors. Without inputs, defaults will be used.
priors.spec(m0 = 0, CS0 = 3, n0 = 0.001, d0 = 0.001)
m0 |
the value of the prior mean at time t=0; default is 0. |
CS0 |
controls the scaling of the prior variance matrix at time t=0; default is 3. |
n0 |
prior hyperparameter of the precision phi; default is 0.001. |
d0 |
prior hyperparameter of the precision phi; default is 0.001. |
At time t=0, (theta_0 | D_0, phi) ~ N(m_0, C*_0 x phi^-1), where D_0 denotes the set of initial information.
priors, a list with the prior hyperparameters. Relevant to dlm.lpl, exhaustive.search, node, and subject.
West, M. & Harrison, J., 1997. Bayesian Forecasting and Dynamic Models. Springer New York.
pr=priors.spec()
pr=priors.spec(n0=0.002)
Compares two population proportions on the network with FDR correction.
prop.nettest(x1, n1, x2, n2, alpha = 0.05, fdr = 0.05)
x1 |
network matrix with successes in group 1. |
n1 |
sample size group 1. |
x2 |
network matrix with successes in group 2. |
n2 |
sample size group 2. |
alpha |
alpha level for uncorrected test. |
fdr |
alpha level for FDR. |
store List with test statistics and p-values.
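A minimal usage sketch with simulated edge counts; x1 and x2 hold the number of successes per edge, n1 and n2 the group sizes:
# simulate binary 5-node networks for two groups of 20 subjects each
N = 20
x1 = apply(array(rbinom(5*5*N, size=1, prob=0.10), dim=c(5,5,N)), c(1,2), sum)
x2 = apply(array(rbinom(5*5*N, size=1, prob=0.40), dim=c(5,5,N)), c(1,2), sum)
res = prop.nettest(x1, N, x2, N)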
Get pruned adjacency network.
pruning(adj, models, winner, e = 20)
adj |
list with network adjacency from getAdjacency(). |
models |
list of models. |
winner |
matrix 2D with winning models. |
e |
Bayes factor for network pruning. |
thr list with pruned network adjacency.
data("utestdata") # select only 3-nodes to speed-up this example sub=subject(myts[,1:3]) p=pruning(sub$adj, sub$models, sub$winner)
data("utestdata") # select only 3-nodes to speed-up this example sub=subject(myts[,1:3]) p=pruning(sub$adj, sub$models, sub$winner)
Randomization test for Patel's kappa. Creates a distribution of kappa values under the null hypothesis.
rand.test(X, alpha = 0.05, K = 1000)
X |
time x node x subjects 3D matrix. |
alpha |
significance level. |
K |
number of randomizations, default is 1000. |
stat lower and upper significance thresholds.
# create some sample data with 200 samples,
# 3 nodes, and 5 subjects
ts = array(rnorm(200*3*5), dim=c(200,3,5))
mysubs=list()
mysubs[[1]]=patel(ts[,,1])
mysubs[[2]]=patel(ts[,,2])
mysubs[[3]]=patel(ts[,,3])
mysubs[[4]]=patel(ts[,,4])
mysubs[[5]]=patel(ts[,,5])
g=patel.group(mysubs)
r=rand.test(rmdiag(g$kappa), K=100)
Reads single subject's network from txt files.
read.subject(path, id, nodes, modelStore = TRUE)
path |
path. |
id |
identifier to select all subjects' nodes, e.g. pattern containing subject ID and session number. |
nodes |
number of nodes. |
modelStore |
can be set to false to save memory. |
store list with results.
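A hypothetical sketch (assumption, not from the original docs: subject() called with an id and a path writes the per-node txt files that read.subject() expects):
data("utestdata")
sub = subject(myts[,1:3], id="sub1", path=tempdir())   # writes results to tempdir()
store = read.subject(tempdir(), "sub1", 3)              # reads them back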
Reshapes a 2D concatenated time series into 3D according to no. of subjects and volumes.
reshapeTs(ts, N, V)
ts |
a 2D time series volumes x nodes. |
N |
No. of subjects. |
V |
No. of volumes. |
M 3D matrix, time series x nodes x subjects.
# Let's say subjects are concatenated in a 2D matrix
# (samples x nodes), with each having 200 samples.
# generate some sample data
N=20
Nn=5
x = array(rnorm(200*N*Nn), dim=c(200*N,Nn))
ts = reshapeTs(x,N,200)
Removes diagonal of NA's from matrix.
rmdiag(M)
M |
Matrix |
matrix with diagonal of 0's.
M=array(rnorm(3*3), dim=c(3,3))
M[as.logical(diag(3))] = NA
M=rmdiag(M)
Removes NAs from matrix.
rmna(M)
M |
Matrix |
matrix with NAs removed.
M=array(NA, dim=c(3,3))
M[1,2]=0.9
M=rmna(M)
Removes reciprocal connections in the lower diagonal of the network matrix.
rmRecipLow(M)
M |
adjacency matrix |
M adjacency matrix without reciprocal connections.
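A minimal sketch: the node pair (1,2) is connected in both directions, and the connection in the lower diagonal is removed.
M = matrix(c(0,1,0, 1,0,0, 0,1,0), 3, 3)   # M[1,2] and M[2,1] are reciprocal
M_ = rmRecipLow(M)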
Scaling data. Zero centers and scales the nodes (SD=1).
scaleTs(X)
X |
time x node 2D matrix, or 3D with subjects as the 3rd dimension. |
S centered and scaled matrix.
# create some sample data
ts = array(rnorm(200*5, mean=5, sd=10), dim=c(200,5))
ts = scaleTs(ts)
Stepwise backward non-exhaustive greedy search; calculates the optimum value of the discount factor.
stepwise.backward( Data, node, nbf = 15, delta = seq(0.5, 1, 0.01), max.break = TRUE, priors = priors.spec() )
Data |
Dataset with dimension number of time points T x number of nodes Nn. |
node |
The node to find parents for. |
nbf |
The Log Predictive Likelihood will sum from (and including) this time point. |
delta |
A vector of values for the discount factor. |
max.break |
If TRUE (default), the search stops once adding or removing parents no longer improves the LPL. |
priors |
List with prior hyperparameters. |
model.store The parents, LPL and chosen discount factor for the subset of models scored using this method.
Stepwise combined forward and backward non-exhaustive greedy search; calculates the optimum value of the discount factor.
stepwise.combine( Data, node, nbf = 15, delta = seq(0.5, 1, 0.01), max.break = TRUE, priors = priors.spec() )
Data |
Dataset with dimension number of time points T x number of nodes Nn. |
node |
The node to find parents for. |
nbf |
The Log Predictive Likelihood will sum from (and including) this time point. |
delta |
A vector of values for the discount factor. |
max.break |
If TRUE (default), the search stops once adding or removing parents no longer improves the LPL. |
priors |
List with prior hyperparameters. |
model.store The parents, LPL and chosen discount factor for the subset of models scored using this method.
Stepwise forward non-exhaustive greedy search; calculates the optimum value of the discount factor.
stepwise.forward( Data, node, nbf = 15, delta = seq(0.5, 1, 0.01), max.break = TRUE, priors = priors.spec() )
Data |
Dataset with dimension number of time points T x number of nodes Nn. |
node |
The node to find parents for. |
nbf |
The Log Predictive Likelihood will sum from (and including) this time point. |
delta |
A vector of values for the discount factor. |
max.break |
If TRUE (default), the search stops once adding or removing parents no longer improves the LPL. |
priors |
List with prior hyperparameters. |
model.store The parents, LPL and chosen discount factor for the subset of models scored using this method.
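A minimal usage sketch; stepwise.backward() and stepwise.combine() take the same arguments:
data("utestdata")
rf = stepwise.forward(myts, 3)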
Estimate a subject's full network: runs an exhaustive search on every node.
subject( X, id = NULL, nbf = 15, delta = seq(0.5, 1, 0.01), cpp = TRUE, priors = priors.spec(), path = getwd(), method = "exhaustive" )
X |
array with dimensions timeseries x nodes. |
id |
subject ID. If set, results are saved to a txt file. |
nbf |
Log Predictive Likelihood will sum from (and including) this time point. |
delta |
a vector of potential values for the discount factor. |
cpp |
boolean true (default): fast C++ implementation, false: native R code. |
priors |
list with prior hyperparameters. |
path |
a path where results are written. |
method |
either exhaustive (default), forward, backward, or both. |
store list with results.
data("utestdata") # select only 3-nodes to speed-up this example sub=subject(myts[,1:3]) sub=subject(myts[,1:3], method="both")
data("utestdata") # select only 3-nodes to speed-up this example sub=subject(myts[,1:3]) sub=subject(myts[,1:3], method="both")
Turns an asymmetric network into a symmetric network. Helper function to determine the detection of a connection while ignoring directionality.
symmetric(M)
M |
3D matrix nodes x nodes x subjects |
3D matrix nodes x nodes x subjects
M=array(NA, dim=c(3,3,2))
M[,,1]=matrix(c(0,0,0,1,0,0,0,1,0),3,3)
M[,,2]=matrix(c(0,0,0,1,0,0,0,0,0),3,3)
M_=symmetric(M)
Compares connectivity strength of two groups with FDR correction.
ttest.nettest(m, g, alpha = 0.05, fdr = 0.05, perm = FALSE, n_perm = 9999)
m |
matrix with dimensions Nn x Nn x N. |
g |
group assignment, vector of type factor of size N. |
alpha |
alpha level for uncorrected test. |
fdr |
FDR alpha level. |
perm |
optional permutation test, default is FALSE. |
n_perm |
number of permutations. |
store List with test statistics and p-values.
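A minimal usage sketch with simulated connectivity strengths for two groups:
# 5-node networks for 20 subjects, 10 per group
N = 20
m = array(rnorm(5*5*N), dim=c(5,5,N))
g = factor(rep(c("A","B"), each=N/2))
res = ttest.nettest(m, g)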
Some LPL values (node 2 as parent of node 1, Simulation 22) to test against.