 

Keynote Lectures

Native Cloud Applications - Why Virtual Machines, Images and Containers Miss the Point!
Frank Leymann, University of Stuttgart, Germany

Semantic Web Evolution - Tectonic Quake or Gentle Drift?
Jérôme Euzenat, INRIA and Univ. Grenoble Alpes, France

RTTMM: Role Based 3-Tier Mobility Model for Evaluation of Delay Tolerant Routing Protocols in Post Disaster Situation
Mohammed Atiquzzaman, University of Oklahoma, United States

Dropout Rates of Regular Courses and MOOCs
Leon Rothkrantz, Delft University of Technology, Netherlands

 

Native Cloud Applications - Why Virtual Machines, Images and Containers Miss the Point!

Frank Leymann
University of Stuttgart
Germany
 

Brief Bio
Frank Leymann is a full professor of computer science and director of the Institute of Architecture of Application Systems at the University of Stuttgart, Germany. His research interests include service-oriented computing and middleware, workflow and business process management, cloud computing, transaction processing, integration technology, and architecture patterns. Before accepting his professorship, he worked for two decades in the IBM Software Group building database and middleware products: he built tools supporting conceptual and physical database design for DB2; built performance prediction and monitoring tools for an object database system; was co-architect of a repository system; built both a universal relation system and a complex object database system on top of DB2; and was co-architect of the MQSeries family. In parallel, Frank worked continuously from the late eighties on workflow technology and became the father of IBM's workflow product set. As an IBM Distinguished Engineer and elected member of the IBM Academy of Technology, he contributed to the architecture and strategy of IBM's entire middleware stack as well as IBM's On Demand Computing strategy. From 2000 on, Frank worked as co-architect of the Web Service stack. He is co-author of many Web Service specifications, including WSFL, WS-Addressing, WS-MetadataExchange, WS-BusinessAgreement, the WS-Resource Framework set of specifications, WS-HumanTask, and BPEL4People; together with Satish Thatte, he was the driving force behind BPEL4WS. He is also co-author of BPMN 2.0 and TOSCA.


Abstract
Due to the current hype around cloud computing, the term “native cloud application” has become increasingly popular. It suggests that an application fully benefits from all the advantages of cloud computing. Many users tend to consider their applications cloud native if the application is merely bundled in a virtual machine image or a container. Even though virtualization is fundamental to implementing the cloud computing paradigm, a virtualized application does not automatically exhibit all properties of a native cloud application. In this work, we propose a definition of a native cloud application by specifying the set of characteristic architectural properties that a native cloud application has to provide. We demonstrate the importance of these properties by introducing a typical scenario from current practice in which an application is moved to the cloud. The identified properties and the scenario show in particular why virtualization alone is insufficient to build native cloud applications. Finally, we outline how native cloud applications respect the core principles of service-oriented architectures, which are currently hyped in the form of microservice architectures.



 

 

Semantic Web Evolution - Tectonic Quake or Gentle Drift?

Jérôme Euzenat
INRIA and Univ. Grenoble Alpes
France
 

Brief Bio
Jérôme Euzenat is a senior research scientist at INRIA, France. He holds a PhD (1990) and a habilitation (1999) in computer science, both from the University of Grenoble. He has contributed to reasoning maintenance systems, object-based knowledge representation, symbolic temporal granularity, collaborative knowledge base construction, multimedia document adaptation, belief revision, and semantic web technologies. His long-standing interests are tied to the relationships holding between various representations of the same situation. Dr Euzenat set up and leads the INRIA Exmo team, devoted to "Computer-mediated communication of structured knowledge". He played a leading role in the definition and development of the ontology matching field.


Abstract
Our societies produce knowledge and data at an ever increasing pace. Semantic web technologies have been incredibly successful at exploiting them, to the point that they are used under the hood of most search engines. Their value is not restricted to the web: they can be used for other purposes (the internet of things and smart cities, to mention only the currently fashionable ones).

This infrastructure may yet prove to be a colossus with feet of clay. Knowledge and data are generated independently by autonomous providers such as individuals or companies; the world is changing continuously; our knowledge of it is changing and expanding with new discoveries. If knowledge does not evolve, it will freeze and then die, as the dinosaurs did. But this evolution is no longer compatible with manual curation and maintenance. We need to ensure that knowledge representations evolve seamlessly and continuously.

In this talk, Jérôme will discuss possible options for coping with knowledge evolution, and appropriate actions when inconsistency arises. One inspiring approach takes its example from how human beings deal with knowledge evolution: acknowledging failures and applying repair actions when they occur.



 

 

RTTMM: Role Based 3-Tier Mobility Model for Evaluation of Delay Tolerant Routing Protocols in Post Disaster Situation

Mohammed Atiquzzaman
University of Oklahoma
United States
 

Brief Bio
Mohammed Atiquzzaman (Senior Member, IEEE) obtained his M.S. and Ph.D. in Electrical Engineering and Electronics from the University of Manchester (UK) in 1984 and 1987, respectively.  He currently holds the Edith J Kinney Gaylord Presidential professorship in the School of Computer Science at the University of Oklahoma.
Dr. Atiquzzaman is the Editor-in-Chief of the Journal of Network and Computer Applications, the founding Editor-in-Chief of Vehicular Communications, and serves/served on the editorial boards of many journals, including IEEE Communications Magazine, Real-Time Imaging, International Journal of Communication Networks and Distributed Systems, Journal of Sensor Networks, and International Journal of Communication Systems. He co-chaired the IEEE High Performance Switching and Routing Symposium (2003, 2011), IEEE Globecom and ICC (2014, 2012, 2010, 2009, 2007, 2006), IEEE VTC (2013), and the SPIE Quality of Service over Next Generation Data Networks conferences (2001, 2002, 2003). He was panels co-chair of INFOCOM’05, serves/has served on the program committees of many conferences such as INFOCOM, Globecom, ICCCN, ICCIT, and Local Computer Networks, and serves on review panels at the National Science Foundation. He is the current Chair of the IEEE Communications Society Technical Committee on Communications Switching and Routing.
Dr. Atiquzzaman received IEEE Communication Society's Fred W. Ellersick Prize, and NASA Group Achievement Award for "outstanding work to further NASA Glenn Research Center's effort in the area of Advanced Communications/Air Traffic Management's Fiber Optic Signal Distribution for Aeronautical Communications" project. He is the co-author of the book “Performance of TCP/IP over ATM networks” and has over 270 refereed publications, available at www.cs.ou.edu/~atiq.
His current research interests are in the areas of transport protocols, wireless and mobile networks, ad hoc networks, satellite networks, power-aware networking, and optical communications. His research has been funded by the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), the U.S. Air Force, Cisco, and Honeywell.


Abstract
In the Internet of Things (IoT), devices are interconnected through the Internet with several redundant paths, but they are still vulnerable to the effects of large-scale disasters such as earthquakes and floods. The disaster area may be disconnected from the rest of the Internet, and the need arises to get information about the victims. Ad hoc networks such as MANETs and DTNs are the most suitable means of supporting communication in partitioned networks, such as a network in a post-disaster situation. Ad hoc networks have thus become an essential network architecture in IoT and have attracted much attention over the last decade. A disaster affects several neighbouring regions with different intensities; each affected region is called a disaster event.
Each disaster event is assigned a group of rescue entities with handheld IoT devices, which perform the tactical operations there. The movement pattern of the rescue entities in a post-disaster area is described by a mobility model, which is used to evaluate routing protocols for post-disaster scenario networks. Existing mobility models for post-disaster scenarios do not distribute the rescue entities in proportion to the intensity of the disaster events when multiple events occur simultaneously. In this work, we propose the Role-based 3-Tier Mobility Model (RTTMM) to mimic the movement patterns of the different rescue entities involved in a disaster relief operation by distributing them in proportion to the intensity of each disaster event.
Our model generates the mobility traces of the rescue entities, which are fed as input to DTN routing protocols. We also evaluate the performance of existing DTN routing protocols using the traces obtained from RTTMM.



 

 

Dropout Rates of Regular Courses and MOOCs

Leon Rothkrantz
Delft University of Technology
Netherlands
 

Brief Bio
Leon Rothkrantz received his MSc degree in mathematics from the University of Utrecht, The Netherlands, in 1971, his PhD degree in mathematics from the University of Amsterdam, The Netherlands, in 1980, and his MSc degree in psychology from the University of Leiden, The Netherlands. He received an honorary doctorate from the Czech Technical University in Prague. He joined the Man-Machine Interaction group and Knowledge Based Systems group of Delft University of Technology as an associate professor in 1992. Since 2008 he has been appointed full professor at the Netherlands Defence Academy at Den Helder. His research interests are: multimodal communication, automatic speech recognition, recognition of facial expressions, pattern recognition, and (dynamic) routing. Since 1999 he has published more than 250 papers in journals and conference proceedings and has supervised 12 PhD students and 10 MSc students during their thesis projects.


Abstract
Recently we have observed an enormous growth of Massive Open Online Courses (MOOCs). Consortia like edX, started by Harvard and MIT, have stimulated many outstanding universities to develop their own MOOCs and to join the consortium. At the start, most MOOCs were developed by gifted teachers, and the courses were similar to digital recordings of regular classroom lectures. The underlying didactic models were similar to those used in regular classroom lectures. Current xMOOCs are composed of short blocks of video lectures, simulations, movies, and assignments with real-life problems. Learning analytics research shows that these transferred classroom models are not the optimal instruction models.
One of the main differences between MOOCs and regular classroom lectures is that the role of the teacher is minimised. The teacher's main role lies in designing the course material, instructional design, and transfer of knowledge. His role as course manager, however, has to be implemented in the course material itself, and the interaction between teacher and student is minimal. In current cMOOCs, students are supposed to cooperate in learning networks. Given the huge number of participating students, real-life interaction with teachers or tutors is no longer an option.
In most current MOOCs, self-management by students is assumed. Students select their courses, plan their study activities, and take the initiative to contact fellow students for joint study activities. There is a focus on 21st-century skills such as critical reflection, cooperation, creativity, the ability to handle big data, and problem solving. The connectivist learning theory supports network learning. Unfortunately, it turns out that giving students the freedom to manage their own study is one of the causes of poor success rates: only a minority of students are able to control their study behaviour. To improve the success rates of MOOCs, specific didactic models should be used.
New requirements for the Learning Management System (LMS) are needed to improve the learning process and to realise the learning goals. In network-based learning, some of the teacher's roles should be fulfilled by fellow students. Special attention is needed to automatically form heterogeneous groups of students to carry out project work within a huge group of students. Experiments using specific didactic models, such as inquiry-based learning, will be discussed. MOOCs are nowadays also employed in honours programmes: students have the freedom to compose their own programme and to develop their own abilities and competences, being members of different learning communities with a lot of community engagement and applied problem solving. But the question is whether this holds only for the happy few or for the majority of students. MOOCs are supposed to enable students to develop themselves according to the “Bildung” principle. What is the most appropriate didactic model?


