
Theory and Methods

Learning Sparse Causal Gaussian Networks With Experimental Intervention: Regularization and Coordinate Descent

Pages 288-300
Received 01 Oct 2011
Accepted author version posted online: 21 Dec 2012
Published online: 15 Mar 2013
 

Causal networks are graphically represented by directed acyclic graphs (DAGs). Learning causal networks from data is a challenging problem due to the size of the space of DAGs, the acyclicity constraint placed on the graphical structures, and the presence of equivalence classes. In this article, we develop an L1-penalized likelihood approach to estimate the structure of causal Gaussian networks. A blockwise coordinate descent algorithm, which takes advantage of the acyclicity constraint, is proposed for seeking a local maximizer of the penalized likelihood. We establish that model selection consistency for causal Gaussian networks can be achieved with the adaptive lasso penalty and sufficient experimental interventions. Simulation and real data examples are used to demonstrate the effectiveness of our method. In particular, our method shows satisfactory performance for DAGs with 200 nodes, which have about 20,000 free parameters. Supplementary materials for this article are available online.
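To make the estimation problem concrete, the display below is a generic sketch of an L1-penalized Gaussian DAG log-likelihood with interventional data; the symbols (beta_{ij}, sigma_j^2, O_j, lambda) and the treatment of intervened observations are illustrative assumptions and need not match the exact formulation in the article.

\[
\min_{B \in \mathrm{DAGs},\; \sigma^2}\;
\sum_{j=1}^{p}\left[\frac{n_j}{2}\log\sigma_j^{2}
+\frac{1}{2\sigma_j^{2}}\sum_{h\in\mathcal{O}_j}
\Bigl(x_{hj}-\sum_{i\neq j}\beta_{ij}\,x_{hi}\Bigr)^{2}\right]
+\lambda\sum_{i\neq j}\lvert\beta_{ij}\rvert
\]

Here \(\mathcal{O}_j\) denotes the observations in which node j is not experimentally intervened on (intervened rows drop out of node j's conditional likelihood) and \(n_j = |\mathcal{O}_j|\); an adaptive-lasso variant would replace \(\lambda|\beta_{ij}|\) with weighted terms \(\lambda w_{ij}|\beta_{ij}|\). In such a setting, a blockwise coordinate-descent scheme can cycle over the edge coefficients and skip any update that would introduce a directed cycle, which is one way the acyclicity constraint can be exploited.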

Acknowledgments

This work was supported by the National Science Foundation grant DMS-1055286 to Q.Z. The authors thank the editor, the associate editor, and two referees for helpful comments and suggestions, which significantly improved the article.
