    The second constraint indicates that if the relationship between two components A and B is two-way, then there are at least two connectors: one with origin A and destination B, and another with origin B and destination A (OCL2 of Fig. 4). Therefore, at least one output port of the first component must be connected to an input port of the second, and vice versa. Furthermore, a component must have at least one port; the restriction is in fact stronger, since it must have at least one input port (OCL3 of Fig. 4). We therefore check that, among a component's ports, at least one is an InputPort. The DSL shown in Fig. 3 allows us to formally describe the structure of our component-based architectures. Our proposal must therefore start from some initial applications (UIs in the proposed example domain). These initial applications are then manually represented (by developers) through their corresponding architectures, performing an abstraction process. From then on, the applications (which are executed on the client side) and the architectural models (which are managed on the server side) must always remain synchronized. On the one hand, changes performed on the client side are communicated to the server side. On the other hand, if the cloud service (proactively) changes the architecture (adding new components, removing unnecessary elements, etc.), the changes in the new model are propagated to the client side. In the case of web user interfaces, the HTML code of the user interface is modified and reinterpreted at run-time.
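The two constraints above can be sketched as plain checks over a toy architecture model. This is a minimal illustration, not the paper's OCL: the class and function names (`Component`, `Connector`, `check_ocl2`, `check_ocl3`) are hypothetical stand-ins for the metamodel elements of Fig. 3.

```python
# Illustrative sketch of the OCL2 and OCL3 constraints of Fig. 4,
# expressed over a toy Python model. All names here are assumptions,
# not the actual metamodel of the paper.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    input_ports: list = field(default_factory=list)
    output_ports: list = field(default_factory=list)

@dataclass
class Connector:
    origin: str       # name of the origin component
    destination: str  # name of the destination component

def check_ocl2(connectors, a, b):
    """A two-way relationship between a and b needs connectors in both directions."""
    forward = any(c.origin == a and c.destination == b for c in connectors)
    backward = any(c.origin == b and c.destination == a for c in connectors)
    return forward and backward

def check_ocl3(component):
    """Every component must expose at least one input port."""
    return len(component.input_ports) >= 1
```

A validator on the server side could run such checks whenever an architecture model is created or modified, rejecting models that violate the constraints before they are propagated to clients.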
    COTSgets-as-a-service The main purpose of this cloud service is to offer the operations required to ensure these capabilities of the COTSget component-based architectures. It therefore includes: (a) management of the COTSgets specifications, (b) management of the COTSgets-based architectures, (c) instantiation of COTSgets components, (d) initialization of user applications based on the architectures, and (e) communication among components belonging to an architecture. All these capabilities are offered at run-time to dynamically provide architecture and component models, thus adopting the Models-as-a-Service term to establish the concept of COTSgets-as-a-Service. Furthermore, this service makes its main parts (such as the databases of components and architectures, the platform-dependent server, and the platform-independent server) highly scalable and distributable, additional benefits derived from cloud computing [44]. Moreover, the purpose of this service is to support interactive systems running on different platforms; nevertheless, the current version of this cloud service only supports the management of component-based applications on the web platform.
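Operations (a) through (e) can be illustrated with a minimal in-memory service sketch. This is only a hedged approximation of the listed capabilities; the class name, method names, and data shapes are hypothetical, and the real service would expose them over the cloud rather than in a single process.

```python
# Minimal in-memory sketch of the five COTSgets-as-a-Service capabilities.
# All names and data structures are illustrative assumptions.
class COTSgetsService:
    def __init__(self):
        self.specs = {}          # (a) COTSgets specifications, keyed by name
        self.architectures = {}  # (b) architectures, keyed by application id
        self.instances = {}      # (c) instantiated components per application

    def register_spec(self, name, spec):              # (a)
        self.specs[name] = spec

    def register_architecture(self, app_id, arch):    # (b)
        self.architectures[app_id] = arch

    def instantiate(self, app_id, component_name):    # (c)
        inst = dict(self.specs[component_name])       # copy the spec as a fresh instance
        self.instances.setdefault(app_id, []).append(inst)
        return inst

    def initialize_application(self, app_id):         # (d)
        arch = self.architectures[app_id]
        return [self.instantiate(app_id, c) for c in arch["components"]]

    def send_message(self, app_id, origin, destination, payload):  # (e)
        # Deliver only if a connector origin -> destination exists in the model.
        conns = self.architectures[app_id]["connectors"]
        return payload if (origin, destination) in conns else None
```

Note how (e) reuses the connector information of the architecture model: communication between two components is only allowed when the model declares a connector between them, which keeps the running application synchronized with its server-side model.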
    Experimentation process To this end, we performed several tests to analyze our setup's behavior, taking into account three parameters that could affect performance: (a) the size of the initial GUI loaded and shown to the user, (b) the coupling degree of the architecture, i.e., the number of connections between components, and (c) the number of concurrent users. We know that other input parameters affect response times, such as network latency or the browser used by the client, among others. However, to ensure correct system performance, we only ran experiments whose features could be controlled or limited. To execute these experiments and measure the performance times, we used a computer with an Intel(R) Core(TM) i5 CPU 660 @ 3.33 GHz, 4 GB of physical memory, and a Windows 8.1 Professional 64-bit operating system. This machine hosted both the platform-dependent and platform-independent servers. For testing purposes, a web application was developed as a client. Each response time is calculated as the average of 100 repetitions of the same test unit. As the proposed infrastructure has three layers (see Fig. 11), we first evaluated the times measured in (A), (B), and (C), as shown in Fig. 15. The time obtained in (A) is the execution time of the functions implemented in the platform-independent layer server (COScore). The time in (B) is derived from (A) but also includes the time taken by the behavior implemented in the platform-dependent layer. Finally, (C) represents the total time elapsed between the moment the client makes a call and the moment a response is received and shown to the user. Fig. 16a shows the response times for GUI initialization when the number of components is varied. The differences between the times measured in (A), (B), and (C) are very small.
Therefore, the following performance times focus on time (C), because it is the longest (in fact, it equals the total process time) and corresponds to the real time the user must wait for a response from the service.
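The measurement procedure described above (each reported response time is the mean of 100 repetitions of the same test unit) can be sketched as a small timing harness. The `request` callable is a placeholder for the real client call measured at point (C); the function name and parameters are illustrative.

```python
# Sketch of the measurement procedure: the mean of `repetitions` runs
# of the same test unit, in milliseconds. `request` stands in for the
# real client call whose response time is being measured.
import time

def average_response_time(request, repetitions=100):
    """Run `request` `repetitions` times and return the mean elapsed time in ms."""
    total = 0.0
    for _ in range(repetitions):
        start = time.perf_counter()
        request()
        total += time.perf_counter() - start
    return (total / repetitions) * 1000.0
```

Averaging over repeated runs smooths out transient effects (scheduling, caching, garbage collection) that would otherwise make a single measurement unrepresentative.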