Apache Airavata

1. Science Gateways

[1] B. Erickson, R. Singh, and A. E. Evrard, “A high throughput workflow environment for cosmological simulations,” in Proceedings of the 1st Conference of the Extreme Science and Engineering Discovery Environment: Bridging from the eXtreme to the campus and beyond, 2012, p. 34.

[2] N. Wilkins-Diehr, “A history of the TeraGrid science gateway program: a personal view,” in Proceedings of the 2011 ACM workshop on Gateway computing environments, 2011, pp. 1--12.

[3] I. Raicu, I. Foster, A. Szalay, and G. Turcu, “AstroPortal: a science gateway for large-scale astronomy data analysis,” in TeraGrid Conference, 2006, pp. 12--15.

[4] J. Ghosh, S. Marru, N. Singh, K. Vanommeslaeghe, Y. Fan, and S. Pamidighantam, “Molecular parameter optimization gateway (ParamChem): workflow management through TeraGrid ASTA,” in Proceedings of the 2011 TeraGrid Conference: Extreme Digital Discovery, 2011, p. 35.

[5] C. Herath, F. Liu, S. Marru, L. Gunathilake, M. Sosonkina, J. P. Vary, P. Maris, and M. Pierce, “Web Service and Workflow Abstractions to Large Scale Nuclear Physics Calculations,” in Services Computing (SCC), 2012 IEEE Ninth International Conference on, 2012, pp. 703--710.

2. Workflow Engines

[1] J. Yu and R. Buyya, “A taxonomy of scientific workflow systems for grid computing,” SIGMOD Record, vol. 34, no. 3, p. 44, 2005.

[2] S. Pandey, D. Karunamoorthy, and R. Buyya, “Workflow engine for clouds,” Cloud Computing, Principles and Paradigms, Wiley Series on Parallel and Distributed Computing, pp. 321--344, 2011.

[3] C. A. Mattmann, D. Freeborn, D. Crichton, B. Foster, A. Hart, D. Woollard, S. Hardman, P. Ramirez, S. Kelly, and A. Y. Chang, “A reusable process control system framework for the Orbiting Carbon Observatory and NPP Sounder PEATE missions,” in Space Mission Challenges for Information Technology, 2009. SMC-IT 2009. Third IEEE International Conference on, 2009, pp. 165--172.

[4] I. Altintas, O. Barney, Z. Cheng, T. Critchlow, B. Ludaescher, S. Parker, A. Shoshani, and M. Vouk, “Accelerating the scientific exploration process with scientific workflows,” Journal of Physics: Conference Series, vol. 46, pp. 468--478, Sep. 2006.

[5] S. Marru, L. Gunathilake, C. Herath, P. Tangchaisin, M. Pierce, C. Mattmann, R. Singh, T. Gunarathne, E. Chinthaka, and R. Gardler, “Apache airavata: a framework for distributed applications and computational workflows,” in Proceedings of the 2011 ACM workshop on Gateway computing environments, 2011, pp. 21--28.

[6] B. Ludäscher, I. Altintas, C. Berkley, D. Higgins, E. Jaeger, M. Jones, E. A. Lee, J. Tao, and Y. Zhao, “Scientific workflow management and the Kepler system,” Concurrency and Computation: Practice and Experience, vol. 18, no. 10, pp. 1039--1065, 2006.

[7] D. Churches, G. Gombas, A. Harrison, J. Maassen, C. Robinson, M. Shields, I. Taylor, and I. Wang, “Programming scientific and distributed workflow with Triana services,” Concurrency and Computation: Practice and Experience, vol. 18, no. 10, pp. 1021--1037, 2006.

[8] E. Deelman, G. Singh, M. H. Su, J. Blythe, Y. Gil, C. Kesselman, G. Mehta, K. Vahi, G. B. Berriman, and J. Good, “Pegasus: A framework for mapping complex scientific workflows onto distributed systems,” Scientific Programming, vol. 13, no. 3, pp. 219--237, 2005.

[9] M. Minor, R. Bergmann, and S. Görg, “Adaptive Workflow Management in the Cloud-Towards a Novel Platform as a Service,” in Proceedings of the ICCBR, 2011, pp. 131--138.

3. Challenges and Opportunities

[1] Y. Gil, E. Deelman, M. Ellisman, T. Fahringer, G. Fox, D. Gannon, C. Goble, M. Livny, L. Moreau, and J. Myers, Examining the Challenges of Scientific Workflows. Citeseer, 2006.

[2] I. Altintas, J. Wang, D. Crawl, and W. Li, “Challenges and approaches for distributed workflow-driven analysis of large-scale biological data: vision paper,” in Proceedings of the 2012 Joint EDBT/ICDT Workshops, 2012, pp. 73--78.

[3] K. K. Dam, D. Li, S. D. Miller, J. W. Cobb, M. L. Green, and C. L. Ruby, “Challenges in Data Intensive Analysis at Scientific Experimental User Facilities,” Handbook of Data Intensive Computing, pp. 249--284, 2011.

[4] E. Deelman and A. Chervenak, “Data management challenges of data-intensive scientific workflows,” in Cluster Computing and the Grid, 2008. CCGRID’08. 8th IEEE International Symposium on, 2008, pp. 687--692.

[5] J. P. Ahrens, B. Hendrickson, G. Long, S. Miller, R. Ross, and D. Williams, “Data-Intensive Science in the US DOE: Case Studies and Future Challenges,” Computing in Science & Engineering, vol. 13, no. 6, pp. 14--24, 2011.

[6] E. Deelman and Y. Gil, “Managing large-scale scientific workflows in distributed environments: Experiences and challenges,” in e-Science and Grid Computing, 2006. e-Science’06. Second IEEE International Conference on, 2006, pp. 144--144.

[7] T. Glatard, J. Montagnat, and X. Pennec, “Efficient services composition for grid-enabled data-intensive applications,” in Proceedings of the IEEE International Symposium on High Performance and Distributed Computing, 2006, pp. 333--334.

4. Research Directions

[1] A. Barker and J. Van Hemert, “Scientific workflow: a survey and research directions,” Parallel Processing and Applied Mathematics, pp. 746--753, 2008.

[2] E. Vairavanathan, S. Al-Kiswany, L. B. Costa, Z. Zhang, D. S. Katz, M. Wilde, and M. Ripeanu, “A workflow-aware storage system: An opportunity study,” in Proceedings of the 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2012), 2012, pp. 326--334.

[3] M. L. Norman and A. Snavely, “Accelerating data-intensive science with Gordon and Dash,” in Proceedings of the 2010 TeraGrid Conference, 2010, p. 14.

[4] A. Chervenak, E. Deelman, M. Livny, M.H. Su, R. Schuler, S. Bharathi, G. Mehta, and K. Vahi, “Data placement for scientific applications in distributed environments,” in Grid Computing, 2007 8th IEEE/ACM International Conference on, 2007, pp. 267--274.

[5] G. Juve, E. Deelman, K. Vahi, G. Mehta, B. Berriman, B. P. Berman, and P. Maechling, “Data sharing options for scientific workflows on amazon ec2,” in Proceedings of the 2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, 2010, pp. 1--9.

[6] D. Yuan, Y. Yang, X. Liu, and J. Chen, “A data placement strategy in scientific cloud workflows,” Future Generation Computer Systems, vol. 26, no. 8, pp. 1200--1214, Oct. 2010.

[7] P. Nguyen and M. Halem, “A MapReduce workflow system for architecting scientific data intensive applications,” in Proceedings of the 2nd International Workshop on Software Engineering for Cloud Computing, 2011, pp. 57--63.

[8] X. Fei, S. Lu, and C. Lin, “A MapReduce-enabled scientific workflow composition framework,” in Web Services, 2009. ICWS 2009. IEEE International Conference on, 2009, pp. 663--670.

[9] N. Dun, K. Taura, and A. Yonezawa, “Easy and instantaneous processing for data-intensive workflows,” in Many-Task Computing on Grids and Supercomputers (MTAGS), 2010 IEEE Workshop on, 2010, pp. 1--10.

[10] R. S. Barga, D. Fay, D. Guo, S. Newhouse, Y. Simmhan, and A. Szalay, “Efficient scheduling of scientific workflows in a high performance computing cluster,” in Proceedings of the 6th international workshop on Challenges of large applications in distributed environments, 2008, pp. 63--68.

[11] E. Bartocci, F. Corradini, and E. Merelli, “Enacting proactive workflows engine in e-science,” Computational Science-ICCS 2006, pp. 1012--1015, 2006.

[12] M. A. Amer, A. Chervenak, and W. Chen, “Improving Scientific Workflow Performance Using Policy Based Data Placement,” in Policies for Distributed Systems and Networks (POLICY), 2012 IEEE International Symposium on, 2012, pp. 86--93.

[13] Ü. V. Çatalyürek, K. Kaya, and B. Uçar, “Integrated data placement and task assignment for scientific workflows in clouds,” in Proceedings of the fourth international workshop on Data-intensive distributed computing, 2011, pp. 45--54.

[14] J. Ekanayake, S. Pallickara, and G. Fox, “MapReduce for Data Intensive Scientific Analyses,” in eScience, 2008. eScience '08. IEEE Fourth International Conference on, 2008, pp. 277--284.

[15] K.-T. Lim, D. Maier, and S. Zdonik, “Requirements for Science Data Bases and SciDB,” in CIDR, 2009.

[16] Y. Gu and R. Grossman, “SABUL: A transport protocol for grid computing,” Journal of Grid Computing, vol. 1, no. 4, pp. 377--386, 2003.

[17] A. Ramakrishnan, G. Singh, H. Zhao, E. Deelman, R. Sakellariou, K. Vahi, K. Blackburn, D. Meyers, and M. Samidi, “Scheduling data-intensive workflows onto storage-constrained distributed resources,” in Cluster Computing and the Grid, 2007. CCGRID 2007. Seventh IEEE International Symposium on, 2007, pp. 401--409.

[18] D. Gannon, B. Plale, M. Christie, L. Fang, Y. Huang, S. Jensen, G. Kandaswamy, S. Marru, S. Pallickara, and S. Shirasuna, “Service oriented architectures for science gateways on grid systems,” Service-Oriented Computing-ICSOC 2005, pp. 21--32, 2005.

[19] C. A. Mattmann, D. J. Crichton, N. Medvidovic, and S. Hughes, “A software architecture-based framework for highly distributed and data intensive scientific applications,” in Proceedings of the 28th international conference on Software engineering, 2006, pp. 721--730.

[20] R. Agarwal, G. Juve, and E. Deelman, “Peer-to-Peer Data Sharing for Scientific Workflows on Amazon EC2,” in 7th Workshop on Workflows in Support of Large-Scale Science (WORKS'12), 2012.

[21] M. Pathirage, S. Perera, S. Weerawarana, and I. Kumara, “A Multi-tenant Architecture for Business Process Execution,” in 9th International Conference on Web Services (ICWS), 2011.

Contributed by: Sanjaya Medonsa (sanjayamrt@gmail.com) - University of Moratuwa | Pavithra Kulathilaka (pavithrask@gmail.com) - University of Moratuwa | Danushka Menikkumbura (danushka@apache.org) - University of Moratuwa
