
Design, simulation and testing of a cloud platform for sharing digital fabrication resources for education

Abstract

Cloud and IoT technologies have the potential to support applications that are not strictly limited to technical fields. This paper shows how digital fabrication laboratories (Fab Labs) can leverage cloud technologies to enable resource sharing and provide remote access to distributed, expensive fabrication resources over the internet. We call this new concept Fabrication as a Service (FaaS), since each resource is exposed to the internet as a web service through REST APIs. The cloud platform presented in this paper is part of the NEWTON Horizon 2020 technology-enhanced learning project. The NEWTON Fab Labs architecture is described in detail, from system conception and simulation to cloud deployment and testing in the NEWTON project's small- and large-scale pilots for teaching and learning STEM subjects.

Introduction

Most developed countries are experiencing a shortage of scientists; for example, the proportion of students graduating in STEM (Science, Technology, Engineering and Mathematics) subjects in Europe has fallen from 12% to 9% since 2000 [1]. There is strong evidence that young people's disengagement from STEM subjects begins during secondary education [2], since students perceive scientific subjects as difficult and consider science-related careers less lucrative and more demanding compared to other disciplines. Governments worldwide are making great efforts to reverse this process and the European Union, in particular, has made a huge investment to fund large-scale technology-enhanced learning (TEL) projects like NEWTON in order to foster a passion for scientific disciplines among the younger generations. The goal of the NEWTON project is to prevent early student dropout from the scientific stream; for this reason, it mainly targets primary and secondary school students. NEWTON aims at developing student-centered non-formal (i.e. outside the education system) and informal (i.e. based on self-learning) teaching methodologies that leverage the latest innovative technologies to deliver learning content more effectively and make STEM subjects more appealing. In this context, Fab Labs [3, 4] have proven to be an innovative and effective teaching tool to attract students to STEM subjects. A Fab Lab is a small-scale workshop with a set of flexible computer-controlled tools and machines such as 3D printers, laser cutters, computer numerically-controlled (CNC) machines, printed circuit board millers and other basic fabrication tools, which allow students to experiment and test theoretical concepts through prototyping. Thus, a Fab Lab is a place where students can learn with a hands-on approach based on experimentation, where they can materialize their ideas in engaging and stimulating ways and supervise the whole fabrication process. The Fab Lab concept is gaining worldwide interest, and both governments and the general public are starting to recognize the importance of digital fabrication technologies even as early as primary and secondary level education (Footnote 1). A direct consequence is that the number of Fab Labs is continuously increasing; to date there exists a worldwide network of more than 1100 Fab Labs located in more than 40 countries, coordinated by the Fab Foundation.

The main factor currently limiting a wider diffusion of the Fab Lab concept is the lab set-up cost (Footnote 2). Fabrication machines and materials are expensive, and not all educational institutions, especially in the primary and secondary education streams, can afford the costs of starting and, especially, maintaining a Fab Lab. Surprisingly, all the research efforts put to date into the digital fabrication area have been aimed at demonstrating the effectiveness of Fab Labs in education [5] and at incorporating digital fabrication into the curricula [6,7,8]. However, to the best of the authors' knowledge, no attempt has been made to address the challenges of enhancing Fab Lab functionality by providing support for pervasive and ubiquitous internet access and resource sharing. This is where the concept of Fabrication as a Service (FaaS) comes into play. FaaS has been introduced in [9] and is an architecture designed to enable remote access to Fab Labs as a cloud-based service. This approach is a necessary evolution of Fab Labs, allowing them to become available to a wider community over the Internet.

As described in [9], the NEWTON Fab Lab platform relies on a loosely-coupled set of microservices running either on the cloud or on the Fab Lab premises. These microservices implement: (1) the communication layer to interconnect all the networked Fab Labs, (2) the Fab Lab software abstraction layer, and (3) the fabrication machines software abstraction layer. Each microservice exposes a set of REST (REpresentational State Transfer) APIs (Application Programming Interfaces) used for system integration and for communication with third-party services and applications. These APIs enable the development of applications and protocols to implement remote access and resource sharing of the underlying digitally-controlled hardware (i.e. the fabrication machines). The cloud infrastructure acts as the hub node of a spoke-hub architecture where the interconnected Fab Labs represent the spoke nodes. The Fab Lab infrastructure can be accessed through a Fab Lab gateway that implements the Fab Lab abstraction layer as well as security and API request rate-limiting policies. Each machine in a Fab Lab is wrapped by a software abstraction layer that provides mechanisms to monitor the machine status as well as the status of the queued jobs. The Hub node keeps a registry of all the interconnected Fab Labs. The registry includes information on Fab Lab location, infrastructure, bill of materials and fabrication machines' load, and is updated in real time using machine-to-machine communication protocols. The Cloud Hub also acts as a router that seamlessly relays the incoming fabrication requests to the Fab Lab that is geographically closest to the requester's location, has availability of fabrication resources and matches the machine and material types specified in the fabrication request.
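To make the FaaS interaction model concrete, the following minimal client-side sketch shows how a third-party application might consume these APIs. The base URL, endpoint paths and field names are illustrative assumptions made for this sketch; the actual API contract is the one specified in [10].

```python
# Hypothetical client-side sketch of a FaaS interaction: query the Cloud Hub
# registry for a suitable Fab Lab, then submit a fabrication job. Base URL,
# endpoint paths and field names are assumptions, not the NEWTON API spec.
import requests

HUB = "https://hub.example.org/api/v1"  # assumed Cloud Hub base URL

# Ask the registry for Fab Labs matching the machine and material types.
labs = requests.get(
    f"{HUB}/fablabs",
    params={"machine": "laser-cutter", "material": "plywood"},
    timeout=10,
).json()

# Submit the design to the first match; the hub relays the request to the
# geographically closest Fab Lab with available fabrication resources.
with open("design.svg", "rb") as design:
    job = requests.post(
        f"{HUB}/fablabs/{labs[0]['id']}/jobs",
        files={"design": design},
        timeout=60,
    ).json()
print("job queued with id:", job["id"])
```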

In this paper we dive deeper into the FaaS concept and the design and development of the NEWTON Fab Lab platform by analyzing in detail the software and hardware architecture as well as the design tradeoffs. The manuscript is organized as follows: Section 2 describes the system architecture and the service integration into the Amazon AWS (Amazon Web Services) infrastructure. Each of the three tiers (i.e. cloud hub, Fab Lab gateway and machine wrapper) is analyzed in depth and a comprehensive description of all the software modules is provided. Section 3 reports the results of the tests performed to stress the platform; the measured data have been used to build a simple simulation model on top of the CloudSim simulator (Footnote 3) in order to roughly estimate system performance and to find possible system bottlenecks under realistic operating scenarios. In Section 4 we analyze the deployment costs of the architecture described in this paper whereas, in Section 5, we evaluate the educational impact of the designed platform and present the data collected and the results obtained during the NEWTON small- and large-scale pilots. Finally, in Section 6 we summarize our achievements, draw some conclusions and analyze possible related research topics and future developments.

System architecture

Most of the digital fabrication machines used in a standard Fab Lab deployment are not open source; this means that hardware and software specifications are not available to developers, and writing drivers and applications for this equipment entails the serious challenge of reverse-engineering the software in order to understand its behavior and write new open-source drivers and interfaces. Another major design constraint on the NEWTON Fab Labs is the lack of internet connectivity of the available fabrication machines. To overcome this limitation, a hardware and software wrapper must be built on top of the fabrication equipment in order to provide the system with the capability to expose a Fab Lab to the internet as a web service. We call this hardware/software wrapper a Pi-Wrapper, since it is implemented on a Raspberry Pi embedded computing board. However, for security reasons, a machine is not directly exposed to the internet but lies behind a Fab Lab Gateway. The Fab Lab Gateway dynamically collects in real time the information from all the machine wrappers, builds a snapshot of all the services available in the Fab Lab and exposes them through a set of APIs that can be consumed by the Cloud Hub application.

The NEWTON Fab Lab architecture is a three-tier spoke-hub architecture in which the interconnected Fab Labs (i.e. the spokes) communicate through a centralized hub located on cloud premises. The digital fabrication equipment of each Fab Lab is not directly exposed to the internet but can be accessed through a Fab Lab gateway that implements filtering and security policies. Finally, each digital fabrication machine has a software wrapper that exposes the underlying hardware through a set of REST APIs. Both the Fab Lab gateway and the machine wrapper are implemented using inexpensive off-the-shelf embedded boards. In our specific case, we use Raspberry Pi boards to implement the gateway and the machine wrappers; for this reason, we also refer to them as the Pi-Gateway and Pi-Wrapper respectively. Fig. 1 depicts the simplified architecture of the NEWTON Fab Lab infrastructure. In order to allow inter-Fab Lab communication, each networked Fab Lab should have at least one public IP address Addr:ePort. The router/gateway maps the inbound traffic into a private address pAddr:pPort by means of a Network Address Table (NAT) and a Port Address Table (PAT). Similarly, the router performs the same task on the outbound traffic by forwarding it to the default gateway or by redirecting the requests for a private address to the private network. The message flow between the cloud application and the networked Fab Labs is managed by a cloud-deployed message broker that implements a publish/subscribe protocol. Spoke and hub nodes form a Virtual Private Network (VPN) in which the Fab Lab gateway and the virtual machine instances on cloud premises communicate securely over the internet using private IP addresses through an IPsec (IP Security) tunnel. IPsec is a suite of protocols for managing secure encrypted communications at the IP packet layer. The cloud and Fab Lab gateways are the tunnel endpoints deployed on cloud and local premises respectively.

Fig. 1 NEWTON Fab Labs simplified system architecture

The cloud hub

The Cloud Hub is the centralized communication hub for all the networked NEWTON Fab Labs and is tightly integrated into the AWS (Amazon Web Services) infrastructure. More specifically, the cloud hub infrastructure requires the following AWS managed services:

  1. Route 53 as the Domain Name Service (DNS).

  2. S3 as the backend storage for the application cluster.

  3. Internet Gateway to expose the underlying public infrastructure to the internet.

Figure 2 depicts the minimum infrastructure requirements for the cloud hub. The deployment requires five EC2 (Elastic Compute Cloud) instances. Two m3.medium instances are necessary to deploy the service networking infrastructure, whereas three m4.large instances are necessary to deploy the cluster with the Platform as a Service (PaaS) infrastructure to manage the Fab Lab cloud services. Digital fabrication services (i.e. the fabrication machines' software wrappers and the underlying hardware) can be accessed through a set of REST APIs described in [10]. The cloud service networking infrastructure is formed by:

  • A VyOS (Footnote 4) software-defined router to forward the incoming traffic from both the internet gateway and the IPSec tunnel to the service cluster in the private sub-network.

  • A reverse proxy to route the traffic forwarded by the VyOS router to the target service running on the service cluster.

Fig. 2 Cloud Hub deployment on Amazon AWS infrastructure

The VyOS router is also used to manage the cloud end of the IPSec tunnel that connects the cloud hub to the Fab Labs network. Thus, the cloud hub and the interconnected Fab Labs form a single VPN in which cloud and on-premises services communicate over an encrypted channel using private IPs.

The PaaS infrastructure is deployed on top of Flynn (Footnote 5). Flynn can be considered a grid of Docker containers rather than a traditional cluster. Each host runs containerized services and applications that can be deployed and scaled individually. Fig. 3 shows a simplified diagram depicting a Flynn grid deployment across a cluster of three hosts. The Flynn architecture is split into two layers. Layer 0 provides basic services such as host management, service discovery and scheduling, whereas layer 1 implements the PaaS business logic (Git interface, Slug Builder, Slug Runner, etc.). Referring to Fig. 3, the layer 0 services are:

  1. The Host Service (HS), which implements the interface between Flynn services and Docker. The Host Service is the only one that must run across all the Flynn hosts.

  2. The Scheduler (S), which distributes the containers among the instances given the current state of the grid and the resource allocation in each node.

Fig. 3 Example of Flynn grid deployment across three hosts

The layer 1 services are:

  1. The Git frontend (G). This module accepts Git connections through SSH and Git pushes, then deploys them in the Flynn grid.

  2. The Controller (C), which exposes APIs to control the whole infrastructure.

  3. The Router (R), a TCP/HTTP router/load balancer that distributes the incoming requests among the instances deployed in the Flynn grid. In order to implement high availability, several instances of this module must run across the Flynn hosts.

  4. The Slug Builder (SB), a module that builds a slug starting from a Git push received by the Flynn Git frontend (G). A slug is a compressed and pre-packaged copy of an application optimized for distribution to the Flynn PaaS.

  5. The Slug Runner (SR), a module that allocates and instantiates several Docker containers (depending on the scaling parameters) to deploy and execute the code contained in a slug.

  6. The Application (A), a module that implements the application code (for example, the Cloud Hub and the Service Registry in our specific case).

The fab lab gateway

The Fab Lab gateway (i.e. the Pi-Gateway) is the entry point to the local network and to the digital fabrication infrastructure of a Fab Lab. Fig. 4 depicts the Pi-Gateway software architecture. The architecture is modular and distributed over four layers. The Communication Layer is a proxy server that implements the communication protocols between the cloud hub and the gateway (HTTP and HTTPS are both supported). The incoming requests are forwarded to the API Wrapper Layer, which implements simple APIs to communicate with the underlying Fab Lab infrastructure and a simple reactive websocket protocol to update the Fab Lab status in the cloud hub infrastructure in real time. The proxy configuration is managed by a command line interface (CLI). Both the CLI and the API wrapper layer leverage the Middleware Layer functions to implement the business logic and the communication protocols. The middleware provides primitive functions to implement websocket communications, logging, process management (programmatically using the APIs provided by the PM2 (Footnote 6) module), transactional e-mail (using an AWS Simple E-mail Service client) and persistence layer interfacing. Open API 2.0 (Swagger) support is also integrated in the middleware layer, and the API specifications are described in [11]. Finally, the Data Layer (persistence layer) is used to store the proxy and Fab Lab configurations. We use a NoSQL model with Redis (Footnote 7) as the key-value store.

Fig. 4 Fab Lab Gateway (Pi-Gateway) software architecture
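Although the Pi-Gateway middleware is implemented on Node.js modules (PM2, Redis clients), the role of the data layer is easy to illustrate. The following Python sketch, whose key names and status fields are purely illustrative assumptions, shows how a gateway could cache each machine's last reported status in Redis and assemble the Fab Lab snapshot exposed through its APIs.

```python
# Illustrative sketch (not the NEWTON schema): cache per-machine status in
# Redis and build the Fab Lab snapshot served by the gateway APIs.
import json
import redis

store = redis.Redis(host="localhost", port=6379, decode_responses=True)

def update_machine_status(machine_id: str, status: dict) -> None:
    """Store the last status pushed by a Pi-Wrapper."""
    store.set(f"machine:{machine_id}", json.dumps(status))

def fablab_snapshot() -> dict:
    """Assemble the Fab Lab status snapshot from all cached machine keys."""
    return {
        key.split(":", 1)[1]: json.loads(store.get(key))
        for key in store.scan_iter("machine:*")
    }

update_machine_status("laser-1", {"state": "idle", "queued_jobs": 0})
print(fablab_snapshot())
```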

The machine wrapper

The Machine Wrapper (i.e. the Pi-Wrapper) provides the connected machine with a software abstraction layer by exposing the machine functionalities through a set of APIs. Fig. 5 depicts the software architecture of the Pi-Wrapper. The software architecture is modular and distributed over five layers. The Communication Layer implements the HTTP server and the API interface to manage and schedule fabrication batches. The Presentation Layer implements the user interfaces to set up and manage a connected fabrication machine. An MVC (Model View Controller) programming paradigm is used at this stage; namely, a route in the browser triggers a controller function that dynamically generates and renders an HTML view using the data stored in the persistence layer (i.e. the database). The Application Layer implements the business logic. The business logic and the user interface rely on the middleware functions implemented in the Middleware Layer. More specifically, the middleware includes custom and third-party methods to manage security and authentication, machine-to-machine communications and interfacing, HTML view rendering, system logging, database connection and access, and ADC (Analog to Digital Converter) drivers to sample data from the machine monitoring circuit as described in [9]. Open API 2.0 (Swagger) support is integrated in the application middleware; this makes the Pi-Wrapper very developer-friendly, since the API and data model documentation is embedded into the application. In addition, a developer can test the APIs using the Swagger user interface, which is also embedded in the Pi-Wrapper. The Swagger Pi-Wrapper API specifications are described in [12]. Finally, the Data Layer is used to store session information as well as the User and Machine data models. We use a NoSQL model and MongoDB (Footnote 8) as the data store.

Fig. 5 Machine wrapper (Pi-Wrapper) software architecture

Machine to machine communication

The communications between client applications and the remote NEWTON Fab Labs rely on a protocol stack which includes a simple publish/subscribe protocol. The fabrication equipment is accessed through the Fab Lab Gateway, which routes incoming commands to a given machine depending on both availability and the specific task to be carried out. The communication protocol relies on a server-to-server model in which some nodes act as message brokers, collecting the incoming messages and relaying them towards a destination node. A fabrication job is routed to a networked Fab Lab by the Cloud Hub message broker; however, the message broker on the cloud side has no direct visibility of the Fab Lab network infrastructure. Its main task is to connect a client to the Fab Lab infrastructure or to perform inter-Fab Lab message routing. The networked machines in a Fab Lab can be accessed through the Fab Lab Gateway only. The gateway's main task is routing the outbound traffic to the networked equipment and managing intra-Fab Lab communications. Fig. 6 presents a simplified timing diagram that describes the communication between the cloud infrastructure and a networked Fab Lab. The message exchange has four stages:

  1. link establishment;

  2. topic subscription;

  3. communication;

  4. disconnection (not illustrated for the sake of simplicity).

Fig. 6 Overview of the Inter- and Intra-Fab Lab Messaging Flow

Once the TCP links between the machine and the Fab Lab Gateway on one side, and the Fab Lab Gateway and the Cloud Hub broker on the other side, have been established, both the Gateway and the Hub subscribe to the topics they are interested in. The topic string is generated using the unique name and connection ID sent by the server that initiates the communications to the destination server during the link establishment. Both the link establishment and the subscription phases are terminated by an ACK message (Init ACK for the link establishment and Subscription ACK for the subscription phase). In other words, the Fab Lab Gateway and the Cloud Hub implement a double-broker architecture: the former collects all the incoming messages from the Fab Lab machines, whereas the latter collects all the incoming messages from the networked Fab Lab Gateways. The double-broker architecture allows the implementation of Fab Lab access and security policies and of custom message-filtering mechanisms. Once the subscription phase has terminated, the end nodes start exchanging messages. Each published message can be acknowledged by an optional Publication ACK message. The use of a Publication ACK is mandatory when it is necessary to guarantee the delivery of a message and to implement retransmission mechanisms that increase the QoS of the protocol.
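The following toy sketch illustrates the messaging flow described above: a topic string derived from the unique name and connection ID, a subscription answered by a Subscription ACK, and an optional Publication ACK. The message formats and class names are assumptions for illustration; the NEWTON protocol itself is a custom publish/subscribe stack.

```python
# Toy in-process model of the double-broker flow; message formats are assumed.
import uuid

def make_topic(server_name: str, connection_id: str) -> str:
    # Topic derived from the unique name and connection ID exchanged during
    # link establishment.
    return f"{server_name}/{connection_id}"

class Broker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of delivery callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)
        return {"type": "SUBSCRIPTION_ACK", "topic": topic}

    def publish(self, topic, payload, require_ack=False):
        for deliver in self.subscribers.get(topic, []):
            deliver(payload)
        # The Publication ACK is optional: mandatory only when delivery must
        # be guaranteed and retransmission implemented.
        if require_ack:
            return {"type": "PUBLICATION_ACK", "topic": topic}

hub_broker = Broker()
topic = make_topic("fablab-madrid", uuid.uuid4().hex)
print(hub_broker.subscribe(topic, lambda msg: print("gateway got:", msg)))
print(hub_broker.publish(topic, {"cmd": "status?"}, require_ack=True))
```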

Test, modelling and simulation

The system infrastructure has been tested in real scenarios through small-scale pilots that have involved the participation of six schools and universities located in three European countries as part of the EU-funded NEWTON project. The test pilots have been used to stress the system infrastructure and evaluate the performance of the proposed algorithms for task scheduling and fabrication resource allocation. In order to detect the system peak performance, the system infrastructure and APIs have also been load tested using Locust (Footnote 9), which allows simulating user behavior by means of a Python script. We have designed a set of simple use cases that stress all the Fab Lab APIs and provide a unified picture of the system performance.

The test scenario implements the use cases described in Table 1. These use cases have been translated into a Python script that is parsed by Locust in order to generate the requests for the infrastructure under test. Locust can be further configured so that the user behaviour described in that script is associated with an arbitrary number of virtual users in order to stress the system response under different load conditions.

Table 1 Fab Lab modules test cases

Load tests

The Fab Lab infrastructure described in the previous sections has been load tested in the following emulated scenarios:

  1. 50 concurrent users with a hatch rate of 5 users per second.

  2. 100 concurrent users with a hatch rate of 5 users per second.

  3. 150 concurrent users with a hatch rate of 5 users per second.

All the incoming requests are forwarded to the same fabrication machine; each test has a duration of 2 minutes and, as mentioned before, each simulated user performs the operations described in Table 1, which means that the following HTTP requests are sent to the Fab Lab APIs (a sketch of the corresponding Locust script follows the list):

  1. GET the available Fab Lab status.

  2. POST a job to the available Fab Lab.

  3. GET the status information of the submitted job.

  4. DELETE the submitted job.

  5. GET the information of the jobs running in the available Fab Lab.
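A Locust script implementing this behaviour could look like the sketch below. The endpoint paths and response fields are assumptions, and the script targets the current Locust API (HttpUser), which may differ from the Locust release used during the pilots.

```python
# Hedged sketch of the load-test behaviour enumerated above; endpoint paths
# and response fields are illustrative assumptions.
from locust import HttpUser, task, between

class FabLabUser(HttpUser):
    wait_time = between(1, 3)  # think time between workflow executions

    @task
    def fabrication_workflow(self):
        # 1. GET the available Fab Lab status.
        self.client.get("/fablabs")
        # 2. POST a job (with the design file attached) to the Fab Lab.
        with open("design.svg", "rb") as design:
            res = self.client.post("/fablabs/1/jobs", files={"design": design})
        job_id = res.json().get("id")
        # 3. GET the status information of the submitted job.
        self.client.get(f"/fablabs/1/jobs/{job_id}")
        # 4. DELETE the submitted job.
        self.client.delete(f"/fablabs/1/jobs/{job_id}")
        # 5. GET the information of the jobs running in the Fab Lab.
        self.client.get("/fablabs/1/jobs")
```

Running, for instance, `locust -f fablab_load.py --users 150 --spawn-rate 5` reproduces the third scenario above (spawn rate being the newer name for the hatch rate).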

The most time-consuming operation is the POST request to submit a fabrication job since it involves the following steps:

  1. Uploading the image to the cloud hub.

  2. Sending the image to the Fab Lab Gateway.

  3. Sending the image to the target fabrication machine.

  4. Updating the job queue in the fabrication machine.

Fig. 7 shows the load test results for the three scenarios under test (i.e., the cases with 50, 100 and 150 concurrent users respectively). Fig. 7a summarizes the overall results for all the request types, whereas Fig. 7b depicts the results for POST requests only. Test results are excellent, considering the Fab Lab infrastructure has been deployed on inexpensive Raspberry Pi III boards. For example, 90% of the incoming requests are served within 680 ms for the 50-user scenario, within 1100 ms for the 100-user scenario, and within 5100 ms for the 150-user scenario. Of course, as outlined earlier in this section, the most time-consuming operations are the POST requests, whose delay can be as high as 9141 ms in the case of 150 concurrent users. An overview of the measurements performed using Locust is summarized in Tables 2, 3 and 4. The tables report the median, minimum, maximum and average response times in milliseconds for each of the APIs called by our simulated scenario for all the test cases studied (namely for the 50-, 100- and 150-user loads respectively). The measured values confirm the excellent performance already outlined by Fig. 7. The total average response times for the 50-, 100- and 150-user test cases are 452 ms, 568 ms and 1680 ms respectively, whereas the maximum average response times are 801 ms, 1158 ms and 3883 ms respectively. An average response time of 3883 ms is acceptable and, according to Fig. 7a, allows, on average, the completion of 100% of the requests for the 50-user scenario, 99% of the requests for the 100-user scenario and almost 80% of the total requests for the 150-user scenario.

Fig. 7 Percentage of Requests Completed in a Given Time Interval: (a) Total Requests and (b) POST Requests

Table 2 Summary of System Performance for 50 Users Load (values are in ms)
Table 3 Summary of system performance for 100 Users Load (values are in ms)
Table 4 Summary of System Performance for 150 Users Load (values are in ms)

Platform modelling

The system stressed by the load tests described in Section 3.1 is a minimum deployment formed by the Cloud Hub located in the eu-central-1 AWS region (i.e., in the Amazon AWS data center in Frankfurt) and a single spoke node (i.e., the San Pablo-CEU Fab Lab located in Madrid). Thus, in order to estimate the performance of larger deployments across several AWS regions, a simulation model is necessary. The cloud infrastructure under test, depicted in Fig. 8, is very complex and requires up to six levels of AWS services (Route 53, Elastic Load Balancing, Autoscaling, EC2 instances, S3 storage and, optionally, CloudFront CDN services). This, in turn, entails several challenges tied to infrastructure and application setup, administration, and behaviour predictability. On one hand, the promise of scalability, redundancy and on-demand service deployment makes a cloud implementation a very appealing solution. On the other hand, all these advantages come at the price of several issues that can make cloud application development and management a challenging task. More specifically, the issues with cloud deployment are related to the following impact factors:

  • Performance: Disk I/O operations can be a serious issue and limit the performance of a cloud deployment. In a cloud infrastructure, the network and the underlying storage are shared among customers. If, for example, another customer sends large amounts of write requests to the cloud storage system, your application may experience slowdowns and its latency becomes unpredictable. Moreover, the upstream network is also shared among customers, so one can experience bottlenecks there too. Unfortunately, cloud vendors tend to offer their customers large storage, but not fast storage.

  • Transparency: Transparency and simplicity are key factors when debugging either an application or an infrastructure. Unfortunately, cloud services are, in many cases, very opaque and tend to hide underlying hardware and network problems. Cloud infrastructure is a shared service and, for this reason, cloud users may experience issues that do not occur in a dedicated infrastructure. More specifically, cloud infrastructure customers share hardware resources such as CPU, RAM, disk and network; thus, the workload of other users can saturate a computing node and heavily affect the performance of your application.

  • Complexity and scalability: Fig. 8 gives an idea of the complexity of the cloud architecture that has been deployed to ensure NEWTON Fab Labs connectivity. This entails the interaction of up to six different AWS service layers that require expertise for set-up and configuration. Moreover, Elastic Load Balancing and scalability are not straightforward in AWS and require the deployment and configuration of additional services (namely, CloudWatch and CloudFormation) that incur extra costs and complexity.

Fig. 8 NEWTON Fab Labs global infrastructure deployment

Finally, as mentioned in Section 2.1, we have deployed a PaaS (Platform as a Service) infrastructure on top of the cloud infrastructure depicted in Fig. 8. The PaaS simplifies application and service deployment in a cloud environment but adds further software layers and additional complexity to the underlying infrastructure, making the application behaviour even more unpredictable. In order to build a simulation model as close as possible to the real behaviour of the cloud infrastructure, we have followed the steps reported in the sequel:

  1. We have instrumented the Cloud Hub server in order to measure the server latency to process an incoming request.

  2. We have developed a fake client that performs fabrication requests at random times and have measured the elapsed time from request arrival to request dispatch to the selected Fab Lab. This time represents the server latency that is necessary to serve a request.

  3. We have performed latency measurements for several server configurations, scaling the number of containers allocated to the database and to the Cloud Hub application.

  4. We have used the measured data to build a simple regression model to predict the server latency as a function of the incoming requests and of the number of allocated containers (a minimal sketch of this step follows the list).

  5. We have deployed a test infrastructure across several AWS data centers in order to ensure the maximum geographic coverage, as depicted in Fig. 8. The Fab Lab network implements a spoke-hub architecture in which each spoke relies on the Registry Server of the Cloud Hub for service detection and traffic routing.

  6. We have performed several measurements on the cloud infrastructure in order to determine latency and bandwidth across the networked Data Centers.

  7. We have used RIPE Atlas (Footnote 10) data to build a latency and bandwidth model for the connections between a client and a Data Center and between a Data Center and the target Fab Lab for each geographic region covered by the AWS infrastructure.

  8. We have used the experimental data gathered in Steps 6 and 7 and the simple predictive model developed in Step 4 to build a delay model for the NEWTON Fab Lab infrastructure.

  9. We have built an ad-hoc simulator on top of CloudSim [13] to simulate the behavior and the performance of the NEWTON Fab Labs network under different load conditions, using the delay model implemented in Step 8.
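As an illustration of Step 4, the sketch below fits a simple multivariate linear model in log space to hypothetical (requests, containers, latency) samples. The paper only states that a simple regression model is used, so both the model form and the sample values are assumptions.

```python
# Minimal regression sketch for Step 4: predict server latency from the
# number of requests r and allocated containers n. Model form and the sample
# values are assumptions; the real data is in the open-sourced data set.
import numpy as np

# Hypothetical measurements: (r, n, latency in ms).
samples = np.array([(1, 1, 120.0), (8, 1, 310.0), (64, 1, 900.0),
                    (8, 4, 150.0), (64, 4, 420.0), (64, 8, 260.0)])
r, n, latency = samples[:, 0], samples[:, 1], samples[:, 2]

# Least-squares fit of log(latency) = b0 + b1*log(r) + b2*log(n).
X = np.column_stack([np.ones_like(r), np.log(r), np.log(n)])
coef, *_ = np.linalg.lstsq(X, np.log(latency), rcond=None)

def predict_latency(r: float, n: float) -> float:
    return float(np.exp(coef @ np.array([1.0, np.log(r), np.log(n)])))

print(f"predicted latency for r=32, n=4: {predict_latency(32, 4):.0f} ms")
```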

Cloud hub delay estimation

The Cloud hub server has been instrumented in order to capture the incoming POST requests and to measure the time elapsed from the request arrival to its subsequent forwarding to the selected Fab Lab. The measurements have been performed for several requesting users and server configurations. For each simulation set-up, the measurements have been performed 10 times at random intervals. We assume that the number n of requesting users is a power of 2 with 1 ≤ n ≤ 64 and that the number c of Docker containers allocated to the Cloud hub server is also a power of 2 with 1 ≤ c ≤ 8. For each configuration under test we compute the mean, the median, the standard deviation and the geometric mean of the measured latencies. The measurements are reported in Tables 5, 6, 7 and 8, which summarize the statistical distributions of the measured delays for several application deployments. As also observed in [9], the measured values exhibit a high standard deviation. Moreover, observing the minimum, the median and the maximum values, one can infer that the measured latencies have a tail distribution (either lognormal or Pareto). This tail behaviour, as reported in [14], is typical of networked and internet applications. More specifically, we have found that the distribution of the measured values, whose statistical behaviour is summarized in Tables 5, 6, 7 and 8, matches a Pareto type I distribution (Footnote 11). Due to the high dispersion of the measured data, the mean values are not meaningful and may lead to wrong conclusions, since the arithmetic mean is heavily affected by outliers. A more objective analysis must rely on the minimum and median values of the latency, as well as on its geometric mean which, unlike the arithmetic mean, is less sensitive to the effect of outliers. Analyzing Tables 5, 6, 7 and 8 as a whole, one can easily observe that the minimum, the median and the geometric mean of the measured delays decrease as expected (with some outliers) as the number of allocated containers scales up. However, this is not the case for the maximum delays. As mentioned before, Downey [14] showed that this high variability is very typical of internet applications. In our specific case, the high dispersion of the measured values is due to the unpredictable latency introduced by the cloud infrastructure. As pinpointed in Section 3.2, a cloud deployment has some drawbacks that arise from the fact that several customers share the same virtualized hardware and network infrastructure. Consequently, the performance of a cloud application is heavily affected by the other customers' applications that are loading the underlying infrastructure at the same time. We have deliberately performed our measurements at random times to capture this variable behavior and the effect of other AWS customers' application load on the performance of our platform. To this latency, we should also add the latency introduced by the virtual network routing infrastructure deployed by Flynn. However, recall that the impact of the maximum delay on the overall performance is minimal since, in a tail distribution, the probability of a very high delay is low.

Table 5 Cloud Hub latency (in ms) with one container allocated to the application
Table 6 Cloud Hub latency (in ms) with two containers allocated to the application
Table 7 Cloud Hub latency (in ms) with four containers allocated to the application
Table 8 Cloud Hub latency (in ms) with eight containers allocated to the application
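The statistics discussed above can be reproduced with a few lines of Python. The sketch below computes the outlier-robust location measures (median and geometric mean) and the standard maximum-likelihood fit of a Pareto type I distribution; the latency samples are hypothetical placeholders standing in for the open-sourced measurements (Footnote 11).

```python
# Robust statistics and a Pareto type I maximum-likelihood fit for a set of
# latency samples. The sample values below are hypothetical placeholders.
import numpy as np

delays = np.array([110.0, 115.0, 120.0, 130.0, 150.0, 170.0, 210.0,
                   400.0, 2500.0])  # note the heavy-tailed outlier

median = np.median(delays)
geo_mean = np.exp(np.log(delays).mean())  # less outlier-sensitive than mean

# Pareto type I MLE: the scale x_m is the sample minimum and the shape alpha
# follows from the log-ratios of the samples to the scale.
x_m = delays.min()
alpha = len(delays) / np.log(delays / x_m).sum()

print(f"median = {median:.0f} ms, geometric mean = {geo_mean:.0f} ms")
print(f"Pareto fit: scale x_m = {x_m:.0f} ms, shape alpha = {alpha:.2f}")
```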

Communication latency and bandwidth

In order to build a realistic simulation model, we need to estimate the communication latency and bandwidth among the nodes that form the Fab Lab network, as well as the maximum concurrency level that each node can support. This goal is accomplished through the following steps:

  • We estimate the network latency Lcj from a client to Data Center j and Lfj from a Fab Lab to Data Center j in the same AWS region. To do this, we use the real measurements provided by the RIPE Atlas network. RIPE Atlas is a public network located in the last mile and formed by more than 16,000 measurement probes capable of measuring connectivity between internet endpoints on demand.

  • We estimate the network uplink and downlink bandwidth between a client and Data Center j (Buplink,cj and Bdownlink,cj respectively) and between a Fab Lab and Data Center j (Buplink,fj and Bdownlink,fj respectively) in the same AWS region. To do this, we use the Cloudharmony speed test service (Footnote 12). However, this service allows measuring the desired parameters only between the client browser and the target Data Center. This means that we are able to track performance only within Europe and must make the simplifying assumption that the network performance within each AWS region is approximately the same, using the measurements performed in Europe as the reference values.

  • We measure the Data Center i to Data Center j network latency Lij using ping and traceroute. Traceroute is even better than ping since it allows testing the response time of each network segment along the path. Therefore, this tool can not only measure but also locate the latency across the routers that form the packet's path.

  • We measure the Data Center i to Data Center j uplink and downlink bandwidths (Buplink,ij and Bdownlink,ij respectively) using the iPerf3 tool.

The delay D of the system response after a fabrication (POST) request has been issued is computed as follows:

$$ D = L_{cj} + t_{uplink,cj} + L_{jk} + t_{uplink,jk} + L_{kj} + t_{uplink,kj} + L_{jf} + t_{uplink,jf} + L_{fj} + t_{uplink,fj} + L_{jc} + t_{uplink,jc} $$
(1)

where j denotes a Data Center located in a spoke node, whereas k denotes the Data Center located in the hub node. The delay D of a response is hence the packet round-trip time necessary to follow the path that goes from the client to the spoke node, from the spoke to the hub node and then to the spoke again, from the spoke to the selected Fab Lab and then to the spoke again, and finally back to the client. Observe that the data transfer time tij between nodes i and j in Equation 1 is computed as:

$$ t_{ij} = \frac{S_{ij}}{B_{ij}} $$
(2)

where Sij and Bij represent, respectively, the number of bytes transmitted and the measured bandwidth between nodes i and j. Table 9 summarizes the average latencies measured from clients to Data Centers in different world regions.

Table 9 Summary of the latencies (in ms) of client-to-Data center connection
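The sketch below is a direct transcription of Equations 1 and 2 into Python; the latency and bandwidth figures are placeholders rather than the measured values of Tables 9 to 12.

```python
# Equations 1 and 2 in code. All latency (ms) and bandwidth (Mb/s) values
# are placeholders, not the measurements of Tables 9-12.
def transfer_time(size_mbit: float, bandwidth_mbps: float) -> float:
    """Equation 2: transfer time in ms for a payload of size_mbit."""
    return size_mbit / bandwidth_mbps * 1000.0

def response_delay(size_mbit: float, L: dict, B: dict) -> float:
    """Equation 1: round-trip delay over the path client -> spoke j ->
    hub k -> spoke j -> Fab Lab f -> spoke j -> client."""
    hops = ["cj", "jk", "kj", "jf", "fj", "jc"]
    return sum(L[h] + transfer_time(size_mbit, B[h]) for h in hops)

# A 5 MB design file is 40 Mbit; placeholder latencies and bandwidths.
L = {"cj": 25, "jk": 90, "kj": 90, "jf": 25, "fj": 25, "jc": 25}    # ms
B = {"cj": 20, "jk": 100, "kj": 100, "jf": 20, "fj": 20, "jc": 20}  # Mb/s
print(f"estimated round-trip delay: {response_delay(40, L, B):.0f} ms")
```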

Table 10 reports the uplink and downlink bandwidth between a client and a Data Center located in the same AWS region. More specifically, these measurements refer to a client and a Data Center located in Europe since, as we pointed out earlier, the Cloudharmony speed test service only allows performing measurements from the client browser to the target Data Center. We will use the values of Table 10 as the reference for all the AWS-supported regions that form the NEWTON Fab Lab network architecture.

Table 10 AWS uplink and downlink bandwidths (Mb/s)

Table 11 reports the Data-Center-to-Data-Center latency. For each possible connection, we report the minimum, average and maximum latency, as well as the standard deviation with respect to the average latency.

Table 11 Summary of the hub-to-spoke latencies (ms)

Finally, Table 12 summarizes the measured uplink and downlink bandwidths for the Data-Center-to-Data-Center connections.

Table 12 Summary of the Inter-Data center uplink and downlink bandwidths (Mb/s)

Concurrency level

We use Apache Benchmark (Footnote 13) to estimate the maximum concurrency level that can be effectively borne by a node of the NEWTON cloud infrastructure. This allows us to estimate the maximum number of concurrent requests that can be served by the cloud infrastructure and to suitably configure the simulator that models the NEWTON Fab Lab infrastructure. The Cloud Hub APIs provide a root (/) endpoint that supports both the HTTP and HTTPS protocols and returns a response with a 200 status code and a body with an empty JSON (JavaScript Object Notation) object. We use this endpoint to ping the Cloud Hub server; however, we can also use the same endpoint to perform simple load tests on our infrastructure. Nonetheless, one has to keep in mind that the results obtained in this way are optimistic, since the authentication server and the underlying database are not stressed. Although the Apache Benchmark tool generates very detailed reports, we are only interested in detecting the maximum number of concurrent requests that breaks our server, leading to a timeout error. In order to do this, we stress our server over a prolonged period with an increasing number of concurrent requests until it breaks. Table 13 summarizes the percentiles measured when a minimum cloud deployment (with only one container allocated to the cloud hub application) is stressed by 20,000 requests with concurrency levels of 10, 50 and 100 respectively. Observing the percentiles of the measurements, we note that in all the scenarios under test the response delays exhibit a tail distribution. In addition, increasing the concurrency level of the incoming requests leads to larger tail delays, with 100 being the maximum concurrency level that can be supported by the cloud configuration under test. However, the measurements carried out are qualitative and are only useful to set up our simulation model with reasonable values. In fact, the measurements have been carried out just for a short period of time, thus they do not consider the delay variability of the cloud infrastructure pointed out previously. Moreover, the measured times refer to the response latency for a simple API endpoint that returns a 200-OK response; consequently, they do not consider the extra latency to access the underlying database to retrieve the Fab Lab information. For all the aforementioned reasons, it seems reasonable to assume that, in a real deployment, the Fab Lab infrastructure can support without problems up to 50 concurrent accesses and manage approximately 1000 requests per second (by scaling up the number of containers allocated to the cloud application).

Table 13 Summary of the response times (in ms) for 20,000 incoming requests
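While the measurements above were obtained with Apache Benchmark, the same break-point probing idea can be sketched in Python: hit the root endpoint with increasing concurrency until requests start timing out. The host URL, request counts and timeout are illustrative assumptions.

```python
# Break-point probe sketch: increase the concurrency level against the root
# endpoint until timeouts appear. Host, counts and timeout are assumptions.
import concurrent.futures as cf
import requests

HUB_ROOT = "https://hub.example.org/"  # assumed: returns 200 and empty JSON

def probe(concurrency: int, total: int = 1000, timeout: float = 5.0) -> int:
    """Return the number of failed or timed-out requests at this level."""
    def one_request(_):
        try:
            return requests.get(HUB_ROOT, timeout=timeout).status_code != 200
        except requests.RequestException:
            return True
    with cf.ThreadPoolExecutor(max_workers=concurrency) as pool:
        return sum(pool.map(one_request, range(total)))

for level in (10, 50, 100, 150):
    failures = probe(level)
    print(f"concurrency {level}: {failures} failures")
    if failures:
        break  # the previous level approximates the breaking point
```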

Simulator implementation and simulation results

The measurements performed on the Cloud Hub infrastructure reported in Tables 5 to 8 show, as expected, a non-normal distribution of the measured data that seems loosely correlated with the number of requests and the number of containers allocated to the application, which makes it very difficult to produce reliable predictions of the Cloud Hub application latency. Lognormal and Pareto distributions are those that better model server response times [14]. For this reason, the proposed prediction scheme does not predict the latency of the Cloud Hub application itself; this would make no sense since, as stated before, in a cloud environment several customers share the same network and infrastructure, which makes it very hard to predict the server behaviour at a given instant. What we do instead is use the measured data to predict the shape of a type I Pareto distribution that models the performance of our cloud infrastructure under different load conditions and numbers of allocated containers. We then use the prediction to generate, in our simulator, a random latency X(r, n), a function of the number r of incoming requests and of the number n of allocated containers, with that Pareto distribution, starting from a uniform random variable U(0, 1) using the following equation:

$$ X\left(r,n\right)=\frac{\hat{x}_i\left(r,n\right)}{\left(1-U\right)^{1/\hat{\alpha}\left(r,n\right)}} $$
(3)

where β̂(r, n) = x̂i(r, n) is the prediction of the Pareto distribution scale parameter and α̂(r, n) is the prediction of the Pareto distribution shape parameter. Both β̂ and α̂ are computed using a simple regression model as a function of r and n. The simulation software has been built according to the following hypotheses:

  1. The CPU load of each instance of the cluster must not exceed 50%.

  2. The requests are evenly distributed among the cluster instances.

  3. The incoming requests are evenly distributed within a given instance among blocks of 8 Docker containers, with 64 requests being the scaling threshold (Footnote 14).

  4. The cluster minimum configuration can manage up to 50 concurrent accesses.

The following pseudo-code snippet describes the block allocation and latency estimation process implemented by our simulator:

[Pseudo-code listing: block allocation and latency estimation (rendered as a figure in the original)]

The algorithm estimates the delay of the infrastructure response and follows the steps described next. First, an array to hold the estimations of the response delay is initialized (line 1). Afterwards, the number of incoming requests is computed and the number of containers necessary to manage all the incoming requests is allocated in each of the virtual machines that form the cluster (lines 3 to 7). Then, the number of requests that must be forwarded to each allocated block of containers is computed (line 8). After that, for each allocated block, the shape of the Pareto distribution of the possible delays is computed (lines 9 to 13). Recall that, as stated before, the Pareto distribution shape and scale parameters are computed by performing a multivariate linear regression on the measured data whose statistics are summarized in Tables 5, 6, 7 and 8. Finally, the values of the shape and scale parameters for the given number of requests and allocated containers are used to estimate the system response latency using Equation 3.
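Since the original pseudo-code listing is only available as a figure, the following Python reconstruction is based on the textual description above. The regression predictors for the Pareto scale and shape parameters are represented by placeholder functions; their actual coefficients come from the fit on the data summarized in Tables 5, 6, 7 and 8.

```python
# Reconstruction of the simulator's block-allocation and latency-estimation
# loop. predict_scale()/predict_shape() are placeholder stand-ins for the
# multivariate regression predictors of the Pareto parameters.
import random

BLOCK_SIZE = 8        # Docker containers per block
SCALE_THRESHOLD = 64  # maximum requests handled per block

def predict_scale(r: float, n: int) -> float:  # beta-hat(r, n), placeholder
    return 100.0 + 2.0 * r / n

def predict_shape(r: float, n: int) -> float:  # alpha-hat(r, n), placeholder
    return 1.5 + 0.05 * n

def pareto_sample(scale: float, shape: float) -> float:
    """Equation 3: inverse-transform sampling of a Pareto type I variate."""
    u = random.random()  # uniform U(0, 1)
    return scale / (1.0 - u) ** (1.0 / shape)

def estimate_latencies(n_requests: int, n_vms: int) -> list:
    delays = []                                  # line 1: result array
    reqs_per_vm = max(1, n_requests // n_vms)    # requests spread over VMs
    blocks = -(-reqs_per_vm // SCALE_THRESHOLD)  # lines 3-7: blocks per VM
    reqs_per_block = reqs_per_vm / blocks        # line 8
    for _ in range(blocks):                      # lines 9-13
        scale = predict_scale(reqs_per_block, BLOCK_SIZE)
        shape = predict_shape(reqs_per_block, BLOCK_SIZE)
        delays.append(pareto_sample(scale, shape))
    return delays

print(estimate_latencies(n_requests=3800, n_vms=3))
```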

Thus, our simulator relies on the measurements reported in Sections 3.3 to 3.5 to build a network bandwidth and latency model, and on Equation 3 to estimate the delay of the spoke and hub nodes taking into account the variability introduced by the shared cloud infrastructure. The overall system delay, i.e. the packet round-trip time from a fabrication request issued by a client until the system acknowledgment, is computed using Equation 1. Experiments have been designed to analyze the behaviour of the NEWTON Fab Lab infrastructure with the following user distributions: 250, 500, 750, 1000, 1250 and 1500 users (the scenarios summarized later in Tables 16 to 21). Each user can issue from one to five requests; moreover, for each load configuration, the number of containers allocated to the application scales as multiples of 8 from 8 to 128 (for 16 possible configurations). Finally, the simulated infrastructure must cover requests from four AWS regions (Europe, North and Central America, South America and Asia-Pacific) in order to ensure a globally optimal service to all the world regions. Table 14 summarizes the experiment configurations. The variable simulation parameters are the number of users, the number of requests per user, and the number of containers allocated to each instance of the cluster. All the other parameters are fixed. This means that for each possible user configuration 80 experiments must be performed (i.e. the number of requests per user times the number of possible container configurations). For the sake of simplicity, we also assume a uniform user distribution among the different AWS regions.

Table 14 Experiment configuration

The scaling threshold is set to 1024 requests, i.e. the request count per target of each Elastic Load Balancing (ELB) target group must be kept as close to 1024 as possible for the Autoscaling group (Footnote 15). More specifically, assume that you have configured an Autoscaling group with a minimum of three instances (i.e. the minimum PaaS cluster configuration) and a maximum of six instances within an ELB group of a given AWS region. Setting a threshold of 1024 means that each instance of your cluster should receive approximately 1024 requests. If the overall number of incoming requests is larger, the number of instances should be scaled up to match the target threshold as closely as possible. For example, if the cluster has three instances and the number of incoming requests is, say, 3800, the system should scale up by one instance (i.e. from three to four), so that each instance handles 3800/4 = 950 requests. Finally, note that with the simulation set-up depicted in Table 14, the maximum number of incoming requests from a given region does not exceed 5000; thus, with a threshold of 1024 it is not necessary to have more than five virtual machines in the Autoscaling group. Prior to running all the experiments, we have to make sure that the mathematical model we have developed for the cloud application behaves as expected. To do this, we simply check that the simulated latency of the NEWTON cloud infrastructure matches a Pareto distribution. After running all the simulations whose set-up is detailed in Table 14, we obtain the Pareto-like distribution of the response latency depicted in Fig. 9. Recall that, as detailed in Table 14, our simulation scenario assumes fabrication requests with a 5 MB attachment (since this is the typical image file size of a design submitted for fabrication). In addition, we have also assumed that the users (and hence the service requests) are evenly distributed among all the Data centers that form the NEWTON Fab Lab cloud infrastructure.
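The scaling rule exemplified above (3800 incoming requests on a three-instance cluster) can be captured in a few lines of Python; the group bounds are the ones assumed in the example.

```python
# Worked sketch of the ELB/Autoscaling rule: keep the per-instance request
# count as close to the 1024-request target as possible, within group bounds.
def instances_needed(incoming: int, target: int = 1024,
                     minimum: int = 3, maximum: int = 6) -> int:
    n = max(minimum, -(-incoming // target))  # ceiling division
    return min(n, maximum)

n = instances_needed(3800)
print(n, 3800 / n)  # -> 4 instances, 950 requests per instance
```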

Fig. 9 Distribution of the response latency for NEWTON Fab Lab infrastructure

Table 15 represents the percentiles for the distribution of Fig. 9. Observe that 50% of the requests are served within 8000 ms and 99% of the requests within 38,000 ms, with 49,000 ms being the worst-case simulated delay. These are indeed excellent results considering that:

  • As highlighted earlier in this paper, cloud infrastructure is shared among many customers, leading to very variable delays.

  • The simulated latency also includes the transmission time of the design file (assumed to be 5 MB) attached to a request (which must go from the client to the spoke or hub node of the NEWTON infrastructure and finally to the target Fab Lab).

  • In the worst-case scenario, the communication delay depends on the following path: client - spoke - hub - spoke - Fab Lab - spoke - client. Thus, the latency of a response can be very high due to the communication overhead introduced by each node in the communication path.

Table 15 Percentile table of the simulated NEWTON Fab Lab cloud infrastructure latency

After running the set of experiments described in Table 14, for each Data center in the network we obtain the performance estimations summarized in Tables 16, 17, 18, 19, 20 and 21. For each simulation scenario and Data center, we report the minimum, maximum, average, median and standard deviation of the simulated latency.

Table 16 Summary of the Data centers performance (250 users scenario)
Table 17 Summary of the Data centers performance (500 users scenario)
Table 18 Summary of the Data centers performance (750 users scenario)
Table 19 Summary of the Data centers performance (1000 users scenario)
Table 20 Summary of the Data centers performance (1250 users scenario)
Table 21 Summary of the Data centers performance (1500 users scenario)

Observing the simulation results, we can easily infer that the cloud system infrastructure behaves as expected since:

  1. The response delay increases with the number of requests.

  2. The Europe Data center is the one that exhibits the longest delays because it is the hub of our infrastructure and must always process all the incoming requests.

  3. The Data center latency exhibits a high variability, which reflects the performance fluctuations of the cloud infrastructure due to resource and network sharing with other customers, as outlined previously.

  4. The response latency exhibits a Pareto-like distribution, which is typical of internet networked systems.

Infrastructure costs

The NEWTON infrastructure must comprise four Data Centers to ensure maximum coverage in all the AWS-supported regions. The Data Centers implement a spoke-hub architecture with the Frankfurt node (eu-central-1 AWS region) as the hub. Spokes must be located in the United States (us-east-1 AWS region), South America (sa-east-1 AWS region) and Singapore (ap-southeast-1 AWS region). The main infrastructure and application (i.e. the registry service, the Fab Lab monitoring service and the Fab Lab connection/routing service) are hosted on the hub, whereas the spokes only run a simple client to query the service registry and the router. With this approach we limit the more expensive virtual machines (i.e. the m4.large instances) to the network hub, whereas the spokes may rely on cheaper virtual machines (i.e. t2.micro instances).

In its minimum configuration, the NEWTON cloud infrastructure relies on the following Amazon AWS services:

  • Between five and eight Elastic Compute Cloud (EC2) instances.

  • Between five and eight EBS volumes, one allocated for each EC2 instance.

  • Route53 DNS service.

  • S3 storage to implement the blobstore for the PaaS infrastructure.

  • Optionally, the CloudFront content delivery network (CDN) service.

The EC2 instances that form the PaaS infrastructure are configured to be autoscaled, according to the platform load, between three and five instances. This, in turn, requires setting up two other AWS services:

  1. CloudWatch, to monitor platform metrics and trigger the autoscaling.

  2. CloudFormation, to dynamically build and deploy new instances of the PaaS platform.

CloudWatch has a free tier. Each month, AWS customers receive 10 metrics (applicable to detailed monitoring for Amazon EC2 instances or custom metrics), 10 alarms, 5 GB of log size, 5 GB of archived log size, 3 dashboards and 1 million API requests at no charge. This should be sufficient for the NEWTON cloud infrastructure to operate safely without incurring extra costs. Conversely, CloudFormation is a free service.

Table 22 summarizes the running costs (VAT not included) of the hub node of the Fab Lab cloud infrastructure. Amazon AWS also offers its customers dedicated instances and dedicated hosts. These solutions isolate your infrastructure from that of the other customers, leading to a more stable and controllable behaviour. Deploying a dedicated instance on AWS incurs an additional cost of $2/h; this means that the monthly running costs of an EC2 instance increase by $1,440 (i.e. $2/h × 24 h × 30 days) if we want that instance to be dedicated. Conversely, the monthly cost of a dedicated host of m4 type in the eu-central-1 region (Frankfurt) is $2,366.09. The spoke node infrastructure is very simple and is formed by one to three autoscaled t2.micro EC2 instances. This infrastructure must be deployed in all the spoke nodes of the NEWTON Fab Lab network: us-east-1 (N. Virginia), sa-east-1 (Sao Paulo) and ap-southeast-1 (Singapore).

Table 22 NEWTON Fab Labs hub node monthly running costs on AWS infrastructure

Tables 23, 24 and 25 report the running costs of the infrastructure for each one of the AWS regions in which the spoke nodes must be deployed.

Table 23 NEWTON Fab Labs us-east-1 spoke node monthly running costs on AWS infrastructure
Table 24 NEWTON Fab Labs sa-east-1 spoke node monthly running costs on AWS infrastructure
Table 25 NEWTON Fab Labs ap-southeast-1 spoke node monthly running costs on AWS infrastructure

Finally, Table 26 summarizes the overall monthly costs necessary to run the whole NEWTON Fab Lab infrastructure. Thus, the infrastructure running costs of a minimum deployment may vary between $1,386.33 and $1,811.89 per month (VAT not included).

Table 26 NEWTON Fab Labs cloud infrastructure overall monthly running costs

Fab labs impact in education

The NEWTON project Fab Labs, as small workshops offering flexible remote digital fabrication, were tested in an educational context. The goal of these tests was to establish the degree of success of the proposed "learning by doing" paradigm in terms of both student learning outcomes and, most importantly, the students' degree of satisfaction. Students from two schools, Saint Patrick's Boys National School in Dublin, Ireland and CEU Monteprincipe School in Madrid, Spain, were exposed to NEWTON Fab Labs as part of the NEWTON education initiative. The 39 students, aged between 10 and 13, were asked to model 3D ceramic vases using third-party design software, prepare the digital files and send them over the Internet to the Fab Lab to be printed. Following the usage of the NEWTON Fab Lab technology, the students were asked to fill in a usability questionnaire. Fig. 10 illustrates the average scores obtained after processing the results of the questionnaire. 87% of the participants from both schools reported that they had fun using the NEWTON Fab Lab technologies and indicated that they would recommend Fab Lab solutions to their friends. This is a great outcome and demonstrates how Fab Labs can have a highly positive impact on student satisfaction while learning. Future work will present in detail the results of the deployment of Fab Labs in education.

Fig. 10 Average scores for the Fab Lab usability questionnaire

Conclusions

FaaS Fab Lab deployment has been performed as part of the NEWTON platform. The platform is now in the production phase and includes the cloud hub (deployed on an Amazon AWS EC2 cluster) and the on-premises interface infrastructure (implemented with inexpensive Raspberry Pi III boards) that has been deployed and is presently under test at CEU Madrid, Spain. This deployment has helped gain significant insights into several design and implementation aspects and trade-offs that include hardware design and interfacing, system monitoring and cloud deployment, data security, as well as service deployment and orchestration in a multi-cloud environment. Several architectural aspects and implementations have been evaluated and tested so far, with particular emphasis on:

  1. system replicability and scalability;

  2. system costs and maintainability;

  3. service availability and auto-discovery in multi-cloud environments;

  4. API architecture and design;

  5. functional and load tests design.

The next step is setting up the system staging environment, which involves networking and interfacing to the cloud hub the Fab Labs at CEU Madrid and Vrije Universiteit Brussel, Belgium. This will enable testing the system in a distributed, yet still controlled, environment. FaaS enhances existing Fab Lab capabilities by providing the digital fabrication equipment with the ability to communicate over the Internet so that fabrication activities can be controlled remotely. Using this approach, the fabrication facilities are exposed to the Internet as software services, which may be consumed by third-party applications. FaaS practical deployment strongly relies on IoT and Cloud architectural and software paradigms and requires the design and development of specific hardware and software interfaces that allow pervasive connectivity. The hardware interface design was not difficult and was accomplished using standard and inexpensive off-the-shelf components. Conversely, firmware and software development were highly challenging and involved solving several complex problems related to equipment monitoring and real-time communications. The paper describes FaaS deployment in the context of NEWTON next-generation Fab Labs; however, the proposed solution is general, hardware-independent and targets all scenarios involving collaborative fabrication. We foresee that this capability will have a huge impact not only on education, but also on industry, helping to develop new business models in which fabless companies may schedule medium or large-scale fabrication batches by hiring third-party remote fabrication services.
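To illustrate the service-consumption model described above, the short Python sketch below submits a design file to a remote fabrication service and polls its status over REST. The base URL, endpoint paths, request fields and response fields are hypothetical placeholders rather than the actual NEWTON Fab Lab API surface, which is documented in [11, 12]; a JWT bearer token is assumed for authentication.

  import requests

  FABLAB_API = "https://fablab.example.org/api/v1"  # hypothetical base URL
  TOKEN = "<JWT obtained out of band>"              # placeholder credential

  def submit_job(design_path: str, machine: str) -> str:
      """Upload a design file and queue it on a remote fabrication machine."""
      with open(design_path, "rb") as f:
          resp = requests.post(
              f"{FABLAB_API}/jobs",                 # hypothetical endpoint
              headers={"Authorization": f"Bearer {TOKEN}"},
              data={"machine": machine},
              files={"design": f},
              timeout=30,
          )
      resp.raise_for_status()
      return resp.json()["jobId"]                   # hypothetical response field

  def job_status(job_id: str) -> str:
      """Query the current state of a previously submitted fabrication job."""
      resp = requests.get(
          f"{FABLAB_API}/jobs/{job_id}",
          headers={"Authorization": f"Bearer {TOKEN}"},
          timeout=30,
      )
      resp.raise_for_status()
      return resp.json()["status"]                  # e.g. "queued", "printing", "done"

  # Example usage:
  # job_id = submit_job("vase.stl", "3d-printer")
  # print(job_status(job_id))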

Availability of data and materials

The NEWTON Fab Lab modules are available under the MIT license through the NEWTON Fab Lab project page on GitHub at https://gcornetta.github.io/gwWrapper/. The experimental data is available at https://github.com/gcornetta/data and is licensed under Creative Commons 4.0 BY-NC-SA.

Notes

  1. “National Curriculum in England: Design and Technology Programmes of Study”, UK Department for Education, 2013, https://www.gov.uk/government/publications/national-curriculum-in-england-design-and-technology-programmes-of-study

  2. The minimum deployment costs of a Fab Lab compliant with the Fab Foundation (https://www.fabfoundation.org/) specifications can be as high as $200,000

  3. http://www.cloudbus.org/cloudsim/

  4. https://vyos.io

  5. https://flynn.io

  6. https://keymetrics.io/pm2/

  7. https://redis.io

  8. https://www.mongodb.com/

  9. Project Website: https://locust.io

  10. https://atlas.ripe.net

  11. The experimental data has been open-sourced and is available at https://github.com/gcornetta/data

  12. https://cloudharmony.com/speedtest

  13. https://httpd.apache.org/docs/2.4/programs/ab.html

  14. This design choice is due to the fact that our simple prediction functions are defined for 1 ≤ r ≤ 64 requests and 1 ≤ n ≤ 8 containers. Also, consider that Flynn does not natively support the container autoscaling feature implemented by our simulator. To enable container autoscaling, you should use other container orchestration platforms such as DC/OS or Rancher instead of Flynn, provided you can afford the higher deployment costs.

  15. Please note that in a real (i.e. not simulated) AWS deployment you need to enable the CloudWatch service to measure the metrics necessary to trigger autoscaling and the CloudFormation service to create and deploy an instance of the PaaS cluster node.
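As a concrete illustration of note 15, the minimal Python sketch below uses the boto3 SDK to create a CloudWatch alarm that could drive a scale-out policy on an EC2 Auto Scaling group. The alarm name, Auto Scaling group name, 70% CPU threshold and policy ARN are all illustrative assumptions, not values from the NEWTON deployment.

  import boto3

  # Minimal CloudWatch alarm driving an (assumed) EC2 Auto Scaling policy.
  cloudwatch = boto3.client("cloudwatch", region_name="eu-central-1")

  cloudwatch.put_metric_alarm(
      AlarmName="newton-spoke-scale-out",            # hypothetical alarm name
      Namespace="AWS/EC2",
      MetricName="CPUUtilization",
      Statistic="Average",
      Dimensions=[{"Name": "AutoScalingGroupName",
                   "Value": "newton-spoke-asg"}],    # hypothetical ASG name
      Period=300,                                    # 5-minute evaluation windows
      EvaluationPeriods=2,                           # two consecutive breaches
      Threshold=70.0,                                # assumed CPU threshold (%)
      ComparisonOperator="GreaterThanThreshold",
      AlarmActions=["arn:aws:autoscaling:..."],      # scale-out policy ARN (placeholder)
  )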

Abbreviations

ADC:

Analog to Digital Converter

API:

Application Programming Interface

AWS:

Amazon Web Services

CDN:

Content Delivery Network

CLI:

Command Line Interface

CNC:

Computer Numerically-Controlled

DNS:

Domain Name System

EC2:

Elastic Compute Cloud

ELB:

Elastic Load Balancing

FaaS:

Fabrication as a Service

HS:

Host Service

HTTP:

HyperText Transfer Protocol

HTTPS:

HTTP Secure

IoT:

Internet of Things

IPSec:

Internet Protocol Security

JSON:

JavaScript Object Notation

JWT:

JSON Web Token

MVC:

Model View Controller

NAT:

Network Address Translation

PaaS:

Platform as a Service

PAT:

Port Address Translation

QoS:

Quality of Service

RAM:

Random Access Memory

REST:

REpresentational State Transfer

SB:

Slug Builder

SOA:

Service Oriented Architecture

SR:

Slug Runner

STEM:

Science, Technology, Engineering and Mathematics

TCP:

Transmission Control Protocol

TEL:

Technology Enhanced Learning

VPN:

Virtual Private Network

References

  1. Convert B (2005) Europe and the crisis in scientific vocations. Eur J Educ 40(4):361–366

  2. Henriksen EK, Dillon J, Ryder J (eds) (2015) Understanding student participation and choice in science and technology education. Springer, Dordrecht, p 412

  3. Gershenfeld N (2012) How to make almost anything: the digital fabrication revolution. Foreign Affairs 91(6):43–57

  4. Blikstein P (2013) Digital fabrication and making in education: the democratization of invention. In: Walter-Hermann J, Büching C (eds) Fab labs: of machines, makers and inventors. Transcript Publishers, Bielefeld, pp 203–222

  5. Martin T, Brasiel S, Graham D, Smith S, Gurko K, Fields DA (2014) Fab lab professional development: changes in teacher and student STEM content knowledge. Digital Fabrication in Education Conference, FabLearn, Stanford

  6. Gul LF, Simisic L (2014) Integration of digital fabrication in architectural curricula. Digital Fabrication in Education Conference, FabLearn Europe, Aarhus

  7. Tesconi S, Arias L (2014) MAKING as a tool to competence-based school programming. Digital Fabrication in Education Conference, FabLearn Europe, Aarhus

  8. Padfield N, Haldrup M, Hobye M (2014) Empowering academia through modern fabrication practices. Digital Fabrication in Education Conference, FabLearn Europe, Aarhus

  9. Cornetta G, Touhafi A, Mateos FJ, Muntean G-M (2018) A cloud-based architecture for remote access to digital fabrication services for education. IEEE International Conference on Cloud Computing Technologies and Applications, Cloudtech, Brussels

  10. Cornetta G, Mateos FJ (2019) Fab lab modules: cloud hub. Online documentation available at https://github.com/gcornetta/cloudhubAPI#documentation-and-developer-support

  11. Cornetta G, Mateos FJ (2019) Fab lab modules: fab lab wrapper (pi-gateway) APIs. Online documentation available at https://github.com/gcornetta/gwWrapper#fablab-apis

  12. Cornetta G, Mateos FJ (2019) Fab lab modules: machine wrapper. Online documentation available at https://github.com/gcornetta/piwrapper#machine-apis

  13. Calheiros RN, Ranjan R, Beloglazov A, De Rose CAF, Buyya R (2010) CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Software: Practice and Experience 41:23–50. https://doi.org/10.1002/spe.995

  14. Downey A (2005) Lognormal and Pareto distributions in the internet. Comput Commun 28:790–801. https://doi.org/10.1016/j.comcom.2004.11.001


Declarations

The authors declare that they have no conflict of interest.

Funding

The work described in this paper is part of the NEWTON project, which has been funded by the European Union under the Horizon 2020 Research and Innovation Programme with Grant Agreement no. 688503.

Author information


Contributions

GC is the main author of this research paper, as well as the software architect and main programmer of the NEWTON Fab Lab platform. FJM has contributed to the software development of the NEWTON Fab Lab platform and has developed the cloud simulator used to estimate the platform performance. AT and GMM supervised and reviewed the associated experiments, and contributed to the literature review and general organization of the paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Gianluca Cornetta.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Cornetta, G., Mateos, F.J., Touhafi, A. et al. Design, simulation and testing of a cloud platform for sharing digital fabrication resources for education. J Cloud Comp 8, 12 (2019). https://doi.org/10.1186/s13677-019-0135-x


Keywords