Beginning 500-Word Summary Project, Week 1

According to the syllabus, the 500-Word Summary project involves the following:

Individual: 500-Word Summary, 10%

Individually, you will write a 500-word summary of a technical or scientific article that demonstrates:

1. ability to identify key processes and concepts in a professional science or technology article.
2. ability to describe complex processes and concepts clearly and concisely.
3. an awareness of audience.

The summary should cite the article and any quotes following APA format.

Perform the following steps to begin your project:

First, use the library’s journal databases (navigate to Start Your Research > Find Articles > A/Academic Search Complete or I/IEEE Xplore) to find an article of sufficient length (at least 4 pages) that focuses on a topic related to your major and career. Save the PDF of the article someplace safe so that you can easily return to it later.

Second, read the article from start to finish.

Third, write a reverse outline of the article: read each paragraph again, put the article away, type one sentence in your own words summarizing that single paragraph, then read the next paragraph and summarize it in one sentence, and so on until the end of the article. Save this reverse outline someplace safe (we will be using it next week), and copy-and-paste it into a comment on this post (click the title above, “Beginning 500-Word Summary Project, Week 1,” scroll down to the comment box, paste your reverse outline, and click “Post Comment”).

14 thoughts on “Beginning 500-Word Summary Project, Week 1”

  1. I. Introduction
    1. Apps have become more popular ever since Apple released its first iPhone
    2. There are a variety of things developers should consider when developing an app
    3. These considerations must be approached in a certain manner
    4. App developers often start projects without a blueprint or design ideas prepared beforehand
    5. Wasserman describes the flaws in how mobile software engineers work
    6. Wasserman continues to describe the issues mobile software engineers face
    II. Related Works
    1. Mobile-D was an approach to mobile development proposed before the iPhone
    2. CBL can be an approach to tackling the problems mobile software developers face
    3. Hybrid Methodology is an approach that can be used to help develop/design applications
    4. The proposed methods have limitations, so none is a complete solution
    III. Research Method
    1. A systematic mapping study was conducted to identify existing knowledge among developers
    2. The study demonstrated that no existing method adequately tackled the problem
    3. A solution was proposed, known as the ‘Integrated Framework for Mobile Application Development’ (IFMAD)
    4. SWEBOK is introduced, along with how it relates to IFMAD
    5. The framework was introduced to a few students, which led to a good evaluation, but there were too many activities
    6. The framework was later modified to have a more appropriate number of activities for students
    IV. Integrated Framework For Mobile Application Development (IFMAD)
    1. A figure illustrates the overall structure of the framework
    A. Development Activities Catalogue
    a. Previous evaluations and surveys were used to structure the catalogue
    B. Agile Core
    a. The development and delivery of an app proceed through different stages
    i. Definition of Product Concept
    1. This is the stage where the team determines whether the app addresses the needs of customers
    ii. Agile Process
    1. This is the stage where the app is built to realize the product concept
    iii. Release Delivery
    1. The last stage where the app should be ready to be launched on various platforms
    2. Future updates are released to meet evolving customer needs
    C. Mobile Ilities
    a. The most diverse attributes from previous evaluations/studies were taken into consideration to address common concerns in app development and grouped together; these are now known as Mobile Ilities
    b. The development team must choose the most relevant mobile ilities, since there are many
    V. Framework Use
    1. The agile process depends on the development team
    2. A case study was conducted with three computer science bachelor’s students to see what they would do
    A. Definition of Product Concept
    a. This is the stage where a team decides how the app will be designed and on what platform
    b. The development team must anticipate the problems that may arise and choose the most relevant ilities to address, which is why a backlog was produced
    B. Agile Core
    a. This is the stage where the team has chosen a process to work with and has begun implementing it
    b. Everyone has their own task to carry out
    c. The team must fill out a table showing what they have done so far
    d. More than one mobile ility can be considered from the development team
    C. Release Delivery
    a. Once the app is developed, the team must create an APK file to upload it to Google Play
    b. IFMAD makes the process of meeting Google’s requirements easier
    c. The feasibility of the method has been confirmed
    VI. Evaluation of Results
    1. The evaluation results helped with the understanding of the framework
    A. Selected Tasks
    a. A table shows how the team tackled their app, and how they did not select all of the activities or tasks
    B. Selected Mobile Ilities By Activity And Task
    a. Most relevant mobile ilities were selected in relation to the app
    C. Mobile Ilities Applied
    a. The evaluation was focused on asking specific questions in relation to the app
    b. A focus group was held to ask the development team why certain mobile ilities caused difficulties in progressing the development of the app
    c. Flexibility is the mobile ility that tends to cause the most problems, due to misunderstandings
    d. While energy can be easily applied as an ility, its actual impact is uncertain, because the app may or may not drain battery life drastically
    e. Device heterogeneity is easily understood, but with so many Android devices on the market, it can become overwhelming if the app is not optimized as it should be
    f. Data security is not much of a concern, because Facebook’s API is in charge of the security aspect
    D. Impact of Mobile Ilities In Process Development
    a. Taking mobile ilities into consideration is confirmed to be very useful for a team developing a good application
    b. The development team expresses and explains how useful taking mobile ilities into account is
    c. The development team understands that not every mobile ility can be addressed, but at least the most relevant ones can be
    d. In another development team, mobile ilities were not taken into consideration, so there were multiple issues/flaws with the app, including the user interface
    VII. Discussion
    1. An explanation of a water-cycle mindset
    2. 3 out of the 6 selected mobile ilities were easy to implement because everyone understood them
    3. Data security can be hard to implement, but using third-party solutions such as Facebook and/or Firebase can make it easier
    4. Energy consumption and app optimization can become issues due to the vast number of Android devices on the market, which receive OS updates at different times
    5. Flexibility is not a big deal because it is a generic concept, but it can become one depending on the context
    6. Everyone agrees that not every mobile ility needs to be addressed; the most relevant ones are sufficient, and this selection takes place in the IFMAD stage
    7. Overall, the framework had a positive impact on the app and the development team, producing fewer errors than they would have had without a framework or mobile ilities to work with
    8. The entire study was a test at a university and may not apply to a real-world development team with more experience
    VIII. Conclusion and Future Work
    1. A framework (Mobile Ilities) was presented to a development team with previous knowledge of Scrum, which led them to take certain problems into consideration
    2. The framework can help guide developer teams in addressing the important problems of an app and also train incoming developers
    3. Mobile ilities will be modified in future work, because not every application is the same, nor does every team have the same knowledge
    4. Characterizing the different mobile ilities will provide more guidance
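    The ility-selection step this outline describes — keeping only the most relevant mobile ilities in a backlog — can be sketched in a few lines. The ility names, relevance scores, and threshold below are hypothetical illustrations, not values taken from the article:

```python
# Illustrative sketch (not from the paper): score candidate "mobile
# ilities" for relevance to the app and keep the top ones as a
# backlog, mirroring the selection step the IFMAD outline describes.

def build_ility_backlog(scores, threshold=3):
    """Keep ilities whose relevance score meets the threshold,
    ordered from most to least relevant."""
    relevant = [(name, s) for name, s in scores.items() if s >= threshold]
    relevant.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in relevant]

# Hypothetical relevance scores (1 = low, 5 = high) for some app.
scores = {
    "flexibility": 2,
    "energy consumption": 4,
    "device heterogeneity": 5,
    "data security": 4,
}

# Not every ility is addressed -- only the most relevant ones.
backlog = build_ility_backlog(scores)
```

    The point of the sketch is the filtering itself: as the outline notes, the team deliberately drops low-relevance ilities rather than trying to address them all.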

  2. Introduction:
    1. According to Kritzinger, the technology trend has moved toward mobile devices like smartphones and tablets
    2. Kritzinger describes the situation in Africa, which has seen an increase in technology; he calls these people “Home Users” and notes that they aren’t aware of the risks involved with accessing the internet
    3. He describes “Home Users” as high-value targets for cybercrime
    4. He considers home computers to be “weaker links” compared with corporate computers, because they lack certain security protocols
    5. To secure them, we need to create computers and networks that are secure “out of the box,” says Kritzinger
    6. Africa is an example of a cybersecurity breakdown: “Millions of Africans are using mobile phones to pay bills, move around cash and buy basic everyday items… Africa has the fastest growing mobile phone market in the world”.

    Paragraph 1:
    1. Citizens in Africa have skipped a generation of technology, which results in them not having the knowledge to be aware of the risks that come with it.
    2. “Home users” are only aware of technical issues such as connecting to the internet
    3. The article goes into depth about a three-step approach that can help users protect themselves from threats
    4. The article lists three categories users can move through to progress their security: “thick, intermediate, and thin”
    5. The main point of this article is to bring to light how home-user security affects users when security protocols are in place
    6. A regular home computer’s security is in the hands of its user, whom Kritzinger describes as a “thick security-oriented user”.
    7. Kritzinger lists the problems these users are prone to: forgetting to download patches/updates, not setting up security settings correctly, not keeping up to date with new security risks, allowing software licenses to expire, incorrect security protection, lack of cybersecurity awareness, weak passwords, and not updating their anti-virus program regularly.

    Paragraph 2:
    1. Lack of cybersecurity by home users can also create problems for their government
    2. “Their computers can be used as platforms to launch an attack on a country’s critical information structures, a situation that could prove strategically damaging to any country”.
    3. To prevent this and maintain a well-balanced cybersecurity system, home users’ security could be delegated to third-party companies that create a secure connection between the home user and the internet.

    Paragraph 3:
    1. Initiatives are being used to provide guidelines for services to assist in cybersecurity.
    2. Initiatives such as those from Australian ISPs help create more security for users; however, more extensive coverage, like malware and virus identification and breach detection, would require ISPs to take more responsibility
    3. Another step users could take to improve their security is learning and implementing the strategies that the ISP uses to help protect them.
    4. These include updated anti-virus software, new software patches, scanning computers for viruses frequently, and stopping spam.
    5. Users must always keep their cybersecurity knowledge current to get the most out of their security programs and protocols.
    6. ISPs can also assist by referring users to portals that increase their knowledge of how to use these security programs.

    Conclusion:
    1. Although ISPs can help increase cybersecurity in your home, it all depends on the user and what they do with the information they receive
    2. Simon Hackett (manager of an Adelaide ISP) states, “ISPs are not the gatekeepers and are not in a position legally or ethically to make decisions for users.”
    3. This means that even though ISPs can help influence users’ decisions, they are not legally able to make those decisions for them
    4. ISPs cannot control users’ desire to protect their internet activities or how much users are willing to pay for security software.
    5. The article brings together measures that can help improve cybersecurity for users, whom it refers to as “intermediate security-oriented home users,” to help them protect their connection to the internet.

  3. Smart Factory of Industry 4.0: Key Technologies, Application Case, and Challenges

    Discussion:
    1. The article mainly focuses on big data, cloud computing, cyber-physical systems, and the industrial Internet of Things.
    INTRODUCTION:
    2. Upgrading the manufacturing industry means combining advanced physical architecture with cyber technologies.
    3. Industry technologies are constructed in three layers: the physical resources layer, the network layer, and the data application layer.
    4. The researchers, Chen et al., examine these issues scientifically and try to find supplementary solutions with references.
    5. Industry needs advanced manufacturing systems with big data warehouses and cloud-based computing.
    SMART FACTORY ARCHITECTURE:
    6. Several studies (Benkamoun et al., 2014; Radziwon et al., 2014; Lin et al., 2016) found that to build a smart factory, manufacturing enterprises need to be more advanced in their production and marketing sections.
    7. It signifies a leap from outdated automation to a completely connected and flexible system.
    8. Chen et al. suggest that many technical problems still need to be solved to build a smart factory. For example, in the physical resources layer, the modular manufacturing unit should be a self-reconfigurable robotic system with a configurable controller, with the self-managing ability to take actions such as extending, replacing, and so on.
    PHYSICAL RESOURCES LAYER:
    9. Morales-Velazquez et al. developed a new multi-agent distributed control system to meet the requirements of intelligent reconfigurable Computer Numerical Control (CNC).
    10. Data acquisition includes data analysis, reporting, network connectivity, and a remote-control monitoring system. The most common wireless sensor networks used for data acquisition are RFID, ZigBee, and Bluetooth.
    11. Zhong et al. proposed an RFID-enabled real-time manufacturing execution system. According to Zhong et al., this system is capable of making decisions and guaranteeing responses within specified time constraints.
    NETWORK LAYER:
    12. The writer proposes a standard OPC UA-based interaction in multi-agent systems. With this system, multiple transport layers and a sophisticated information model allow the smallest dedicated controller to interact freely with complex, high-end server applications in real time.
    DATA APPLICATION LAYER:
    13. Wan et al. present a manufacturing big data solution for active preventive maintenance in a manufacturing environment, which combines a real-time active maintenance mechanism with an off-line prediction method.

    ISSUES AND CHALLENGES:
    14. Despite all of this, Chen et al. draw attention to the fact that there are still difficulties in building a smart factory: to have a self-reconfigurable robotic system, the equipment must support smart manufacturing, and the industrial Internet of Things must progress.
    15. For knowledge-based intelligent manufacturing, the manufacturing entity should be intelligent enough to provide data collection, data fusion, and extraction of manufacturing resource characteristics.
    CONCLUSIONS:
    16. This article discusses the latest of four distinct industrial revolutions that the world has experienced or is currently experiencing.
    17. The writers present key technologies and show that the OEE (Overall Equipment Effectiveness) ratio improved.
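    The OEE ratio mentioned in the conclusion is a standard manufacturing metric (availability × performance × quality). As a minimal sketch of how it is computed — the shift figures below are invented for illustration, and the article's application case may measure it differently:

```python
# OEE (Overall Equipment Effectiveness) = Availability x Performance x Quality.
# The input figures here are hypothetical, for illustration only.

def oee(planned_minutes, run_minutes, ideal_rate, total_count, good_count):
    """Compute OEE from planned vs. actual run time, the ideal
    production rate (units per minute), and total/good unit counts."""
    availability = run_minutes / planned_minutes
    performance = total_count / (ideal_rate * run_minutes)
    quality = good_count / total_count
    return availability * performance * quality

# A shift planned for 480 min actually ran 432 min, producing
# 3800 units (3610 good) against an ideal rate of 10 units/min.
score = oee(480, 432, 10, 3800, 3610)
print(f"OEE = {score:.1%}")
```

    Improving any one of the three factors — less downtime, faster cycles, or fewer defects — raises the overall ratio, which is why smart-factory papers often report OEE as their headline result.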

  4. Towards the Detection of UX Smells: The Support of Visualizations

    I. Introduction and Motivation
    1. User satisfaction with software products is largely influenced by UX attributes.
    2. Poorly designed usability can make it harder for users to navigate an interface.
    3. HCD (Human-Centred Design) model is used to evaluate systems.
    4. Many methods can be used to evaluate systems but are rarely implemented by developers for a variety of reasons.
    5. Research done on e-government websites has shown that developers need to be provided with the tools and methods to effectively evaluate and implement usability.
    6. ‘Usability smells’ and ‘Code smells’ respectively indicate weaknesses in the design of an interface or code that can cause problems in the future.
    7. Researchers present a new approach to identifying usability smells.
    8. The results reported by the study provide usability evaluators with the tools they need to detect usability smells.
    II. Related Work
    A. Tools for Usability Testing
    1. There are two methods used for evaluating usability: user-based methods and analytical methods.
    2. There are several tools available to assist in the different stages of usability testing.
    3. There are tools available that provide visual representations to help understand user behavior.
    4. Heatmap visualizations are used to highlight the areas users largely interact with.
    5. Researchers provide several examples of tools that are used to help evaluators identify usability issues but note that they do not allow the evaluator to analyze user behavior in depth.
    6. Clickstream analysis tools analyze large amounts of data and users and provide complex visualizations, but the research presented is limited to a smaller group of users.
    B. Usability of E-Government Sites
    1. e-Government makes use of ICT (Information and Communication Technologies) to provide citizens and businesses with access to information and services.
    2. DESI (Digital Economy and Society Index) is an index used to measure the progress of digital performance and competitiveness in the EU.
    3. Research shows that more than half of the population lacks basic computer skills and websites with low usability make it harder for these users to navigate webpages.
    4. Italy created a Working Group on Usability, which created a protocol that can be used as a guide to performing usability tests.
    5. eGLU Box is a web-based platform that was developed to help design and run usability tests.
    6. Researchers explain how these tools provide valuable support but still require experience and resources.
    III. Identification of Usability Smells
    1. Researchers describe the four techniques used to identify usability smells in website navigation.
    2. Graph-based structures are commonly used to visualize website navigation or general navigation paths.
    3. Details the visual encoding used in visualizations.
    4. Explains how the nodes are used to reveal usability smells.
    5. Introduction to a usability test that was executed in March 2019 to evaluate the Italian Navy Website.
    6. Goes on to describe the steps the user would take to navigate to a certain webpage.
    7. Reflects on how, in order to correctly accomplish the task, the user should have visited 4 pages and 3 links; but since the website has since been updated, the steps taken to accomplish the task may no longer be the same.
    8. Scalability is not an issue when performing these tests because many of these tests involve a low number of participants executing simple tasks.
    A. Arc Diagram Visualization
    1. Explains the structure of the first visualization based on Arc Diagram.
    2. Explains how the numbers used to represent the participants are used in the nodes.
    3. Explains how node labels are used to improve visualization readability.
    B. Page Tree Visualization
    1. Explains the structure of the second visualization based on Page Tree.
    2. Explains the transition of each node.
    3. Remarks on the positive aspect of this visualization.
    C. Sankey Diagram Visualization
    1. Explains the structure of the third visualization based on the Sankey Diagram.
    2. Continues to explain the structure of the Sankey Diagram.
    D. Node-Link Visualization
    1. Explains the structure of the last visualization based on the Node-Link visualization.
    2. There are well documented problems with Node-Link visualization in terms of visual complexity and readability.
    3. There are mechanisms that allow the user to change the visualization.
    4. Using these mechanisms can make analysis slower and more tiring.
    IV. Experimental Study
    1. Introduction of a study done of complex visualization techniques that follow the HCD approach.
    2. Presents the questions the study is designed to answer.
    A. Participants and Study Design
    1. Explains who participants are and what design was selected for the study.
    B. The Experimental Tasks
    1. Explains the role the participants would be taking in the study.
    2. Explains how the tasks were to be performed.
    3. Describes the tasks.
    4. Total number of trials was 300.
    C. Procedure
    1. Describes where the study takes place, what is involved, who is involved, what the reward for participating in the study was and what information they collected from the participants.
    2. Participants were given a booklet composed of the four visualization techniques and tasks to be completed with each technique.
    3. The facilitator introduces and explains the first visualization technique and the participant begins performing the tasks.
    4. In order to check the overall research methodology, the procedure has been assessed by a pilot study.
    D. Data Collection and Analysis
    1. Data is collected and analyzed by researchers using different methods.
    2. Researchers created an excel file for each task performed in order to evaluate the support provided by the visualizations.
    3. Two well known questionnaires were used to evaluate satisfaction with each visualization technique.
    4. Repeated measures are used to assess the significant differences in the four visualization techniques.
    E. Results and Discussion
    1. The data collected shows how well the visualization techniques provide support to the evaluators in identifying usability smells.
    2. The first task showed how some visualization techniques were more efficient at identifying usability smells than others.
    3. The Sankey Diagram and Node-Link made it easier for evaluators to interpret the data whereas the Page Tree did not.
    4. Despite there being some confusion about the paths certain visualization techniques provided, evaluators were able to detect the paths that led to task failure.
    5. The second task demonstrated how some visualization techniques could generate errors in path identification.
    6. The third task identified how many users were confused by the webpage and took alternative paths and ultimately did not complete their task.
    7. The Arc Diagram and Sankey Diagram performed better than the other visualization techniques in the third task.
    8. There was a difference in time for the task performed in the fourth task for each of the visualization techniques.
    9. The Arc Diagram allowed the participants to accomplish their tasks quickly.
    10. The Sankey Diagram was the best visualization technique in the fifth task in identifying the web page with the most problems.
    11. There were no differences between the four visualization techniques in terms of evaluator satisfaction.
    12. In conclusion, none of the visualization techniques proved better than any other as they all were able to identify usability smells.
    V. Conclusion
    1. Through this article and the study reported, we were able to see how the four visualization techniques are used by evaluators to identify usability smells.
    2. The study demonstrated how through the use of visualization techniques, evaluators/researchers were able to identify and modify certain elements to improve usability.
    3. Further testing needs to be done with a larger number of evaluators with different levels of expertise in order to overcome the limitations of the first experimental study.
    4. The goal of the research done in this article is to provide resources and techniques for UX evaluation that can become common industry practices.
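    The core idea behind all four visualizations in this outline is aggregating participants' navigation paths so that deviations from the expected route stand out as usability smells. A minimal sketch of that aggregation — the page names, paths, and expected route below are invented, and this is not the authors' tooling:

```python
# Sketch of the idea behind the navigation visualizations: aggregate
# participants' click paths into edge counts, then flag pages where
# many users left the expected route (a potential "usability smell").
# The paths and expected route below are invented, not study data.
from collections import Counter

def edge_counts(paths):
    """Count page-to-page transitions across all participants."""
    counts = Counter()
    for path in paths:
        for src, dst in zip(path, path[1:]):
            counts[(src, dst)] += 1
    return counts

def smelly_pages(paths, expected):
    """Pages where participants deviated from the expected next page."""
    next_expected = dict(zip(expected, expected[1:]))
    smells = Counter()
    for (src, dst), n in edge_counts(paths).items():
        if src in next_expected and dst != next_expected[src]:
            smells[src] += n
    return smells

expected = ["home", "fleet", "ships", "target"]
paths = [
    ["home", "fleet", "ships", "target"],
    ["home", "news", "home", "fleet", "ships", "target"],
    ["home", "news", "search", "target"],
]

# "home" accumulates deviations: users who left for "news" instead of "fleet".
smells = smelly_pages(paths, expected)
```

    An arc diagram, page tree, Sankey diagram, or node-link view each renders these same edge counts differently; the study compared how well each rendering helps an evaluator spot the pages with high deviation counts.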

  5. COMPUTER NETWORK SIMULATION AND NETWORK SECURITY AUDITING IN A SPATIAL CONTEXT OF AN ORGANIZATION

    Introduction
    1. Competition among today’s business organizations has pushed many of them toward advancement. Businesses operate under micro and macro environments.
    2. The two environment types act as forces shaping the scope of change within the organization.
    3. Zaliwski (2005) wants to communicate to such organizations concerning the Computer Network Simulation (CNS) and Network Security Auditing (NSA) that would follow the spatial pattern.
    4. In the article, Zaliwski (2005) informs that the micro issues require immediate attention from the management system.
    5. The disruption of the business organization operations by macro-threats such as the government and the competitors is under check by the law and customs.
    6. However, the micro-level of threats involves those who are not satisfied by the laws and rules that govern the procedures.
    7. An example is the computer network threats.
    8. Therefore, the suggestion is to have professionals with the skills to manage the computer networks, which, given the nature of the current system, means wiping out the complexity of the security-related software and easing the security auditing methodologies.
    9. Zaliwski (2005) reports that the complexity of the system and the difficulty of the methodologies make them hard for staff to apply.
    10. The proposal does not neglect that the security model must align with the policies and procedures of the organization, working hand in hand with the organizational structure (Zaliwski, 2005).
    11. Although security systems are critical to the organization’s operations, they need to be simple and easy to interact with for usability.
    12. Besides, the solution needs to be cheap, involving an effective and inexpensive laboratory.
    13. The laboratory, in this case, is used for research and teaching sessions to advance security systems related to the computer network.
    13. The use of the laboratory, in this case, is research and teaching sessions for the advancement of security system related to the computer network.

    Related Solutions
    1. As per Zaliwski (2005), a possible method to reach the goal is to create a virtual computer network in a physical lab.
    2. That would mean shortening the physical computer chains that would have added expense to the system.
    3. The system would work with open source, commercial and rare solutions.
    4. Also, the system would require graphical network visualization.
    5. It would help the professionals to understand the data connections (Zaliwski, 2005).
    6. Besides, software for network design and administration and the management part would be necessary for the system to be effective.

    The Proposed Solutions
    1. No other system would work better than one that involves three sub-systems.
    2. They include the spatial models, the repositories, and the virtual networks (Zaliwski, 2005).
    3. The entire system would require three computers where one would serve as the host for all virtual machines.
    4. User Mode Linux creates and maintains the virtual machines on that computer.
    5. The second would connect to the virtual world, while the third would design and keep data for auditing purposes.
    6. The system that Zaliwski (2005) describes is a lightweight one.
    7. It is simple for the professional to use.
    8. Also, the system is cheap and affordable for micro-business firms and teaching departments.
    9. The auditing methodologies would be simple, unlike the existing systems that keep the professionals scratching their heads.
    10. Therefore, the solution is to move the network lab from physical to virtual.

  6. Basic electronic computers perform their tasks sufficiently, but demand for faster computing power has risen. Computers that can prevent bottlenecks and perform current tasks in a fraction of the time are desired, so the idea of a computer that doesn’t use binary (1s and 0s) was introduced: the ternary optical computer (it uses three digits instead of two).
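    The “three digits instead of two” idea can be made concrete with a small base-conversion sketch. This is purely illustrative: a real ternary optical computer encodes its three states in properties of light, not in digit strings.

```python
# Minimal sketch of "three digits instead of two": the same integer
# written in binary (two symbols) vs. ternary (three symbols).
# Illustrative only; real ternary optical computers encode the three
# states in properties of light, not in character strings.

def to_base(n, base):
    """Render a non-negative integer as digit characters in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(str(r))
    return "".join(reversed(digits))

# 100 needs 7 binary digits but only 5 ternary digits.
binary = to_base(100, 2)   # "1100100"
ternary = to_base(100, 3)  # "10201"
```

    Fewer digits per value is one intuition for why a three-state machine can pack more information into each physical symbol.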

    Various achievements were made possible with the theory and construction of ternary optical computers. During the planning stages the architecture was modified with complex designs like carry-free addition and vector-matrix multiplication. These modifications made this theory come true and now ternary computers exist.

    Although how this computer will operate has been determined, one problem still existed: how will the service perform? What needs to be evaluated is the duration of any given task. This is called quality of service: the time from when a customer issues a request until it is completed.

    The main responsibility of this report is to outline the construction and the results of the ternary optical computer. The results were predicted by mathematical equations, and the computer outperformed those predictions.

    The rest of this report explains the build model of the computer and gives more detail on how the system works, including the algorithms the system performs, how it carries out tasks and the process it follows, as well as the conclusion of this work and potential future directions.

    There is a diagram of how the hardware components of this computer look. Production started at the beginning of 2017, and the architecture looks similar to cloud computing. It breaks down into three categories: Infrastructure as a Service, Platform as a Service, and Software as a Service. Their roles are as follows: IaaS controls the servers, controller, reconfigurable processors, and network resources; PaaS covers task management, the database, and development tools; SaaS is the software model through which application programs are obtained.

    Queueing theory is used to measure the performance of tasks in cloud computing, and it is important for measuring the resource allocation of the ternary computer. In various prior works there were different methods of using queueing theory to measure devices, and these provided a foundation for the construction of this new ternary computer. This means the device can be organized within a concise framework for analyzing and simulating the process of an optical computer, and it was therefore a breakthrough for computing power.
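    Queueing theory gives closed-form performance estimates for simple service models. As a taste, here are the textbook M/M/1 formulas; the arrival and service rates below are made up, and the article's actual queueing model for the ternary optical computer may well be more elaborate:

```python
# Textbook M/M/1 queue formulas, as a taste of how queueing theory
# predicts quality of service. The rates below are invented; the
# article's model for the ternary optical computer may differ.

def mm1_metrics(arrival_rate, service_rate):
    """Return utilization (rho), mean number of jobs in the system (L),
    and mean time a job spends in the system (W) for a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    rho = arrival_rate / service_rate       # fraction of time the server is busy
    L = rho / (1 - rho)                     # mean jobs waiting or in service
    W = 1 / (service_rate - arrival_rate)   # mean time in system (Little's law)
    return rho, L, W

# E.g. tasks arriving at 8/s, served at 10/s on average:
rho, L, W = mm1_metrics(8.0, 10.0)
```

    W here is exactly the quality-of-service quantity the report describes: the time from a customer's request until it is completed.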

  7. Rational function distribution in computer system architectures: key to stable and secure platforms
    I. INTRODUCTION
    1. Computer systems suffer from a lack of rational function distribution across the many levels of hardware and software.
    2. Rational function distribution means that functions serving minimization goals are important software elements.
    3. The issue is that the combined hardware and software products of the industry have not been given the proper elements to perform the task of creating stable connections.
    4. The model shown in this article is used for showing the effects and costs of certain levels involving hardware and software.
    II. MODEL OF FUNCTION DISTRIBUTION
    1. Each level contains different materials, and each level uses the level below it as a tool for more complicated projects.
    2. Each level contains its own complexity due to the process of mapping.
    3. As you go up each level, the number of people becoming active in the level increases.
    4. As the levels increase, the cost of complexity increases.
    5. Since complexities are passed upward, this has led to unreliable and insecure platforms.
    III. PRINCIPLES THAT LEAD TO COMPLEXITIES
    1. The first principle involves giving the problem to someone else who can solve it for you.
    2. The second principle is giving the user all possibilities of what to do.
    3. The third principle is using a tool that can be adapted to perform a function.
    4. The fourth principle is to take whatever design mistake was made and see if it can fit the needs of what has to be done.
    5. Determine if the software is useful or not.
    6. If the software becomes a mess, then create software that acts as a bridge between an operating system and applications on a network.
    IV. BUSYWARE AND STABLEWARE
    1. Patches have become useful for fixing bugs instead of using a large workforce to fix them.
    2. With the effort toward stable and secure platforms, there will be no need for large groups of people to fix the platform's complexities.
    V. SYSTEM SOFTWARE-INSTRUCTION SET RELATIONSHIP
    1. The interface between software systems is especially important, so two approaches were created.
    2. The case of IBM System/360 turned out to contain a lot of problems regarding its complexity in decision making.
    3. Due to the overwhelming problems that had occurred, customers would not have a chance to master it in their own environment.
    4. The case of Burroughs involved multiple highly advanced products without realizing the cost and reliability needed.
    5. Had there been a more strategic plan about releasing the product, technology could have been different today.
    VI. WHERE ARE THE COMPUTER ARCHITECTS?
    1. The rapid advancements of technology in the mid-1970s meant that hardware-software products that could serve good functions did not survive.
    2. The focus was placed on the performance of processors.
    3. Compatibility costs must be made to match safety standards, so this is the time for new computer system architectures to arrive.
    4. System-based education must be given to computer system architects who have worked on many computer systems.
    VII. SYSTEM ENGINEERING OF THE COMPUTER INDUSTRY
    1. Here are the roles and responsibilities of a computer architect.
    2. The person must find mappings of each level and distribute functions for goals.
    3. Using a structure requires creativity, and it must be central to any designer.
    4. People could easily settle for a quick solution, but no such solution can lead to improvements.
    VIII. REBIRTH OF COMPUTER INDUSTRY
    1. In a field like this it is important to think about scenarios that could happen.
    2. The new dominant actor reduces the complexity of stableware platforms.
    3. There is potential for some countries to reach broad solutions regarding stable platforms.
    4. An example is Russia, for its use of simple technology.
    5. The dominant customer scenario has people produce a kind of trustworthy platform.
    6. This can create potential for some catastrophe in certain areas for the business.
    7. The rebirth is the best scenario for its increase in products to fight off other competitors.
    8. Of course, effort must be put into the instruction sets of such hardware.
    9. The amount of competition put into these computers can help advance software.
    IX. AUTONOMIC COMPUTING
    1. Transforming the computer industry into stableware is an admirable long-term goal; however, today's computer systems need help now.
    2. IBM's vice president of research, Paul Horn, created a new field for the computer industry.
    3. This field would require a machine that can perform at its best so users do not have to concern themselves with small details.
    4. Creating that kind of system can be quite challenging for anyone to master its complexity.
    5. Rational function distribution with autonomic computing can help contain complexities today.
    X. CONCLUSION
    1. Large amounts of code are needed in order to achieve certain functions for the software.
    2. Computer system architects must be given the proper knowledge to ensure secure and stable platforms.
    3. Stableware could happen in the future, but the risk of accomplishing it could prove to be fatal.

  8. 1. Introduction
    a. COVID-19 was declared a pandemic by the WHO on March 11th, which resulted in lockdowns; as a result, many people were either laid off or told to work from home.
    b. Working from home during this pandemic was a rushed process, so many people do not have an isolated place to work, which results in distractions and other things to take care of.
    c. There are no prior studies of working from home during a pandemic this large.
    d. Question: “How is working from home during the COVID-19 pandemic affecting software developers’ emotional wellbeing and productivity?”
    2. Background
    a. Pandemics, Bioevents and Disasters
    i. Pandemics are very stressful to everyone
    ii. Many ways to reduce the effect of the pandemic
    iii. Efforts to lessen pandemic effects vary greatly from person to person
    iv. Less likely to comply with anti-pandemic efforts when basic needs are at risk
    v. Employers can relieve workers' stress if they can ensure that they will not lose their jobs
    vi. Businesses struggle and need new strategies to keep running
    vii. Other Covid studies show there are effects on productivity
    b. Working from Home
    i. Remote working using the internet allows workers to continue working from home even after the pandemic.
    ii. Not ideal for everyone as not everyone has a dedicated workspace at home.
    iii. Some say that working from home increases productivity, but work that was once separate from home is now part of it.
    iv. Pandemic has different effects on a person’s emotional stability
    v. Productivity from remote working may rely on self-reports, which can be biased
    c. Productivity and Wellbeing
    i. Productivity is affected by many factors
    ii. Productivity as a measure for software engineers is difficult
    iii. Using the number of commits or modified lines of code to measure productivity is not accurate
    iv. Some companies use these questionable measurements for software engineers
    v. No consensus on how to measure productivity of a software engineer
    vi. Software developers wellbeing is closely related to job satisfaction
    3. Hypotheses
    a. Developers will have lower wellbeing while working from home due to COVID-19
    b. Developers will have lower perceived productivity while working from home due to COVID-19
    c. Change in wellbeing and change in perceived productivity are directly related
    d. Disaster preparedness is directly related to change in well-being and change in perceived productivity
    e. Fear (of the pandemic) is inversely related to change in wellbeing and change in perceived productivity
    f. Home office ergonomics is directly related to change in wellbeing and change in perceived productivity
    g. Disaster preparedness is inversely related to fear (of the pandemic)
    4. Method
    a. An approved and translated questionnaire was sent out for data collection
    b. Population and Inclusion Criteria
    i. Target of study are software developers who went from office work to home work.
    ii. Questionnaire open to all software developers or similar
    c. Instrument Design
    i. Fully anonymous questionnaire; no contact information collected
    ii. Questions were grouped into blocks by scale or question type and were not randomized
    iii. Questionnaire had a filter question to exclude respondents who did not meet the requirements
    iv. Emotional Wellbeing questions using WHO-5
    v. Perceived Productivity questions HPQ
    vi. Disaster Preparedness questions DP
    vii. Fear and Resilience question FR
    viii. Ergonomics
    ix. Organizational Support OS
    d. Pilot
    i. Acquired feedback from colleagues and changed the survey to be better
    e. Sampling, Localization, and Incentives
    i. Survey advertised on many websites
    ii. Other ways to send the survey were considered
    iii. Instead of a global campaign it was a collection of smaller campaigns
    iv. Localization had some languages use a different website
    v. No cash incentive but offered to donate money to open source projects
    5. Analysis and Result
    a. 2668 responses with only 2225 meeting criteria
    b. Data was cleaned: no empty fields, no timestamps; a field of translated responses was added
    c. Validity Analysis
    d. Demographics 53 countries
    e. Change in Wellbeing and productivity
    i. Developers will have lower wellbeing while working from home due to COVID-19 (Supported)
    ii. Developers will have lower perceived productivity while working from home due to COVID-19 (Supported)
    f. Structural Equation Model for rest of hypotheses
    i. Rest of Hypotheses were not supported
    g. Exploratory findings: SEM results have interesting patterns
    h. Summary
    i. Software developers working from home are showing lower productivity and wellbeing
    6. Discussion
    a. Recommendation
    i. Normal productivity during pandemic is not realistic to expect
    ii. Employees should accept that they can’t output as much work
    b. Limitations and threats to validity
    c. Implications for Researcher and future work
    i. This paper opens new research area for working from home during crises
    ii. Does not investigate normal software development practices like pairing or mutation
    d. Lessons
    i. Working with international teams for a multi language survey can generate large samples
    ii. Google Forms is blocked in some countries
    7. Conclusion
    a. COVID-19 is taking a toll on businesses and organizations
    8. Data is viewable on a website
    a. https://zenodo.org/record/3783511
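
The WHO-5 wellbeing instrument mentioned in the outline above (4.c.iv) is conventionally scored by summing five items rated 0–5 and multiplying by 4 to get a 0–100 percentage. A minimal sketch with invented example responses, not data from the study:

```python
# Sketch of conventional WHO-5 scoring: five items rated 0-5,
# raw sum 0-25, multiplied by 4 for a 0-100 wellbeing percentage.
# The example ratings are invented, not responses from the survey.

def who5_score(items: list[int]) -> int:
    """Convert five WHO-5 item ratings (each 0-5) to a 0-100 score."""
    if len(items) != 5 or any(not 0 <= x <= 5 for x in items):
        raise ValueError("expected five ratings in the range 0-5")
    return sum(items) * 4

print(who5_score([3, 4, 2, 3, 4]))  # 64
```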

  9. I. Introduction
    1. Cryptocurrencies are a relatively recent domain that became active in the last decade.
    2. Artificial Intelligence techniques can learn to analyze and discover patterns for secure trading and mining.
    3. Big Data and cryptocurrency focus on “security and privacy enhancement” and “prediction and analysis”.
    II. Background
    1. Bitcoin is the cryptocurrency with the highest market capitalization.
    2. Aiming to replace the centralized financial system, bitcoin depends on a decentralized peer-to-peer network.
    3. Bitcoin uses “elliptic curve digital signature algorithm” to process a transaction from the sender and the receiver’s Bitcoin addresses.
    4. Mining pools use very high processing power and most use application-specific integrated circuits (ASICs) which are special hardware circuits designed for Bitcoin mining.
    5. Alternative cryptocurrencies offer greater speeds and other advantages over Bitcoin, some examples are Namecoin, Ripple, Zcash, Litecoin.
    6. Decentralization allows cryptocurrencies to have immunity to government control and interference.
    7. Miners can use AI techniques to increase their profits and save electricity for environment considerations.
    8. Security specialists use these techniques to analyze security and privacy levels of cryptocurrencies and spot threats or pitfalls.
    III. Artificial Intelligence Research in Cryptocurrencies
    A. AI is used in intelligent trading systems to predict stock market and currency price predictions.
    1. Measurements of public interest in Bitcoin from Twitter feeds, Google Trends and Wikipedia were recorded. Twitter and Google Trends are used to predict prices of Bitcoin and Ethereum.
    2. A deep learning model was used to predict Bitcoin prices and extent of transactions fluctuation of the currency. Deep learning models such as deep neural network (DNN), long-short term memory (LSTM) and artificial neural network (ANN) were all used for price predictions.
    3. Methods that used gradient boosting decision trees performed better for short-term 5/10-day predictions, while LSTM performed better for predictions based on an estimated 50 days of data.
    4. Averaged one-dependence estimators (AODE) is a model that used a probabilistic classifications technique. It is used to predict fluctuation in prices and transactions at different lags.
    B. Volatility is the degree of variation of a trading price series over time. It is caused by the decentralized nature making the prices uncontrollable by organizations and the government.
    C. Trading bots are known as software products or websites that offer “algorithmic trading” which automatically analyze market actions and indicators.
    D. Fraud detection is based on detecting anomalies and suspicious behavior in the transactions and trades history. Types of scams include phishing, hacking, digital theft.
    E. Privacy and anonymity – anonymity is used by criminals to prevent others from knowing their identities while doing crime, and privacy protects data of transactions.
    F. Ideas that revolve around AI techniques can be used in mining processes to solve the crypto-puzzle in PoW, or entirely replace it by using an AI-based consensus mechanism.
    G. Some threat examples to security include attacks on the distributed network, mining process attacks, double spending, and transaction malleability attacks.
    IV. Discussion and Possible Future Research Directions
    A. Social media plays a big part for price predictions, notable media posts are from Twitter, Reddit, Google Trends, and BitcoinTalk forum posts.
    B. Volatility prediction is approached with GARCH model variants, which are preferred by financial experts because they provide a real-world context for price and return predictions.
    C. Online commercial trading bots are able to apply price and volatility prediction techniques.
    D. Fraud detection research showed poor results due to the unavailability of sufficient datasets.
    E. Clustering techniques help regulatory authorities spot suspicious activities like money laundering and other illegal behaviors.
    F. Miners are urged to use data analysis for mining in order to implement newer strategies that predict profit from transaction fees.
    G. AI techniques can help detect attacks or suspicious activities on the network.
    V. Conclusion
    1. This survey is used to help researchers that are interested in the application of AI and machine learning techniques in cryptocurrency.
    2. There are still many AI techniques that can be used to tackle different cryptocurrency challenges.
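
Fraud detection by spotting anomalies in transaction history (III.D above) can be illustrated with a toy statistical outlier check. Real systems use far richer features and the ML techniques the survey covers; the amounts and the 2-sigma threshold here are assumptions for illustration only:

```python
import statistics

# Toy anomaly detector: flag transaction amounts more than 2 standard
# deviations from the mean of the history. Purely illustrative; the
# survey's actual methods are ML-based and far more sophisticated.

def flag_anomalies(amounts: list[float], threshold: float = 2.0) -> list[float]:
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    return [a for a in amounts if stdev and abs(a - mean) / stdev > threshold]

history = [12.0, 9.5, 11.2, 10.8, 9.9, 10.4, 500.0]
print(flag_anomalies(history))  # [500.0]
```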

  10. Security in Social Networking Services: A Value-Focused Thinking Exploration in Understanding Users’ Privacy and Security Concerns.
    1. Introduction
    A. The exponential growth of the Internet and Social Networking Services (SNS) has further exacerbated the challenges of privacy and security. It is argued that this transformation in the environment has provided increased opportunities for security breaches due to increased interconnectivity.
    B. Social media reports suggest that in 2014 Facebook had over one billion monthly active users. Twitter and LinkedIn also have millions of users with 2015 reports of over 250 million monthly active users for both.
    C. The use of social media in developing countries is increasing at a faster rate than in developed countries. The growth in Internet use is also attributed to the increasing use of mobile/smartphones in this region.
    D. The growth of mobile social networking also contributes, and it is reasonable to assert that there is increased usage in developing economies.
    E. The explosive growth of social networks has opened an avenue for criminal and other antisocial behavior. Developing countries are more vulnerable because of the lack of, or weak safeguards and protections. An assessment to enable improved end-user security and privacy in SNS is needed.
    1.1. Research problem and opportunity statement
    A. Today, one of the big challenges is to protect the privacy and security of data in networked computer systems.
    B. SNS are very easy to use, but these systems pose certain privacy concerns and risks to their members. Social media puts our privacy at risk. Cyber-attacks in SNS have increased rapidly, and social networks have become the new target for cybercriminals.
    C. The rapid growth in the use of smartphones in developing countries has further complicated this problem. 55,000 pieces of mobile malware are found on a daily basis, and mobile malware incidents have increased by 614% between March 2012 and March 2013.
    D. There are 2 billion mobile subscribers globally, and 370 million of these subscribers are smartphone users. As the number of mobile subscribers and smartphone users continues to increase steadily, the security threat increases.
    E. The issue of trust and privacy in the online social network community remains an area of concern. Social networking sites by their very nature are a repository of personal information. Many users are not aware of the dangers that these online social sites pose. As a result, the Internet makes it easier for information thieves to gather information used to bait and lure targets. It is argued that the increased exposure to security threats from online use is primarily attributed to the lack of knowledge and understanding of the imminent threats. This increased exposure also gives credence to the argument of end users being a major source of vulnerability or the weakest link in systems.
    F. There is a significant growth in the number of cybercrimes being committed in the social media domain. Cyberbullying, cyberstalking, identity theft and social engineering are some of the crimes. Cybercrime is an epidemic and SNS is becoming a prime target for these attacks. There is an urgent need to educate users of these networks of the privacy and security risks that exist in SNS.
    G. Study seeks to determine the users’ values in maximizing their security and privacy experiences in SNS. Study is motivated to utilize a preemptive approach in understanding values and objectives, determining means to achieve them. The VFT technique has been used successfully to solve decision problems in multiple situations.
    2. Literature review
    2.1. Social media and social networks
    A. Social media is a term used to refer to an “umbrella concept that describes social software and social networking” Social media can be described as an “ecosystem” of related elements as it brings together both digital and traditional media.
    B. There are many categories of social media, with one popular classification being by its characteristics. These sites can be oriented toward work-related contexts (e.g. LinkedIn), romantic relationship initiation (the original goal of Friendster.com) or connecting those with shared interests such as music or politics (i.e. MySpace)
    2.2. An overview of the cybersecurity landscape
    A. Research indicated that there is complacency toward cybersecurity, and this has led to increased vulnerabilities in networks. As modern societies and critical infrastructures become dependent on information systems, the Internet has become the medium of choice for communication.
    B. According to Teixeira, Amin, Sandberg, Johansson, and Sastry, concerns are mounting about the safety and reliability of these systems. Addressing this situation is necessary as, if left unattended, it could have devastating effects.
    2.3. Cybersecurity in developing countries
    A. There is an increased use of the Internet and its services in the developing states, yet these regions are behind in preparing for the imminent risks involved in the use of these global networks. Researchers pointed out that in developing countries, there is inadequate cybersecurity controls and a scarcity of the knowledge and skills that are required to develop suitable cybersecurity strategies. In a recent IT4D special issue that looked at cybersecurity, researchers put forward several approaches with the intent of addressing areas particularly relevant to developing economies.
    B. The aim of this is to bring the “missing piece” that will allow institutions to manage this evolving situation in a systematic way, says Choucri and Gercke. It is intended to serve as a benchmark for countries as they improve their response capabilities to cybersecurity management which is capability-centric instead of merely responding to threats, they say.
    C. It could assist developing countries to develop an effective cybersecurity capability that positively impacts socioeconomic development, they argue.

  11. Energy access and pandemic-resilient livelihoods: The role of solar energy safety nets
    Abstract
    Solar energy safety nets provide many social benefits and are an efficient way to survive during the pandemic.
    1. Introduction
    a. Developing countries have been hit hard by COVID-19
    b. Solar energy safety nets give developing countries a chance to withstand the pandemic and raise their standard of living
    2. Energy access, resilient livelihood and pandemic COVID-19
    2.1. Last mile energy poor
    a. People who live in rural areas (called “last mile”) sometimes do not have access to technology and many services.
    2.2. Pandemic-resilient livelihoods
    a. Access to energy helps poor people increase their level of education and develops their capacity to prepare for market-related or natural risks
    b. Access to electricity provides access to education and jobs, and allows people to stay at home and decrease the spread of the virus
    c. However, people from the last mile do not have the access to electricity
    d. Access to energy is expensive and requires government subsidies and material assistance
    3. Energy assistance programs
    a. Energy assistance programs make energy available to the poorest groups of people
    b. Expanding the grid in rural areas is a good solution for people who live far away, and it can be achieved by independent solar home systems which provide energy at the household level
    3.1. Solar energy safety net programs
    a. Many countries have their own programs that allow them to extend independent home solar systems
    b. Very often, national political processes delay the provision of off-grid energy access
    c. Many nations have a federal program that serves poor people and provides them access to energy
    4. Solar energy safety nets in times of pandemic
    4.1. Initial effects and responses
    a. COVID-19 hit poor people hard and increased the difficulty of paying for energy services
    b. Some countries are taking actions to stave off an energy crisis, such as a 50% cut in the price of solar kits or helping companies operate with renewable energy sources.
    c. Some countries have expanded their other federal programs and adopted over 1,000 social programs, thereby reducing funding for energy programs.
    d. These measures disrupt solar energy service providers, driving them toward bankruptcy
    e. Also, the functioning of existing systems is disrupted, since proper maintenance is not performed.
    f. Therefore, the last mile households are unlikely to get out of energy poverty
    4.2 Extended policy response for solar energy safety nets
    a. Continuous government funding of energy programs is essential to expand access to energy for the last mile poor and helps them better cope with the impact of the COVID-19 pandemic
    b. A well-designed SESN program makes it possible to get out of the current crisis as it gives employment opportunities for people living in this area and, with an increase in production potential, makes it possible to earn money by selling the energy produced to other people.
    c. Second, these programs open up a spectrum of affordable services for the poor, thereby smoothing out social inequalities.
    5. Concluding remarks
    a. Energy poverty affects millions of people in developing countries, limiting their ability to cope with pandemics such as COVID-19
    b. Changing priorities in a country's policy threaten programs supporting the development of solar energy
    c. The main challenge for politicians is to keep long-term goals, even in a short-term crisis.
    References

  12. I. Introduction
    1. As time passes, more data is used than ever before, which can lead to challenges in data management, storage, and processing
    2. In healthcare the volume of data keeps increasing as new technologies are released such as, wearable health devices
    3. Medical equipment needs to collect a lot of data to quickly respond to an emergency
    4. Healthcare devices create different types of data which include text, image, audio and video that may be structured or non-structured
    5. Deep value from healthcare data can be maximized through data fusion of EHR (electronic health records) and electronic medical records
    6. Cloud computing and big data can help organize healthcare data
    7. Some issues need to be resolved; one issue is that healthcare data stored together on the physical layer is still logically separated
    8. The biggest challenge of building a comprehensive healthcare system is handling heterogeneous healthcare data that comes from multiple sources
    II. Health-CPS Architecture
    1. In the healthcare industry, cloud and big data are very important and are becoming a trend in healthcare innovation
    2. Medicine relies a lot on specific data and analysis
    3. The system must support different types of healthcare equipment
    4. It's important to have different data structures to deploy suitable methods for efficient online or offline analysis
    5. The system is expected to provide many applications and services for different roles
    6. Data collection layer collects raw data in different structures and formats to ensure security
    7. Data management layer which includes Distributed file storage(DFS) and distributed parallel computing(DPC)
    8. Application service layer which gives users visual data and analysis results
    III. Data Collection Layer
    1. In the data collection layer, data is collected by the data nodes
    2. Data nodes can be divided into four groups
    3. Research data
    a. Digital data has been a new way for scientific research in identifying side effects of drugs and its new effects
    4. Medical expense data
    a. Non-traditional healthcare data, like medical insurance reimbursements and medical bills, are geographically dispersed and can be used to estimate medical costs.
    5. Clinical data
    a. Clinical data is used in many medical services like EMR and medical imaging, while keeping patients' privacy
    6. Individual activity and emotional data
    a. Wearable devices for patients can give access to individuals' emotional data for measuring mental health, which can help advance recovery for patients
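
The data collection layer above gathers heterogeneous records from four groups of data nodes. A minimal sketch of such a dispatcher follows; the group names come from the outline, but the record format ("source"/"payload" keys) is an invented assumption:

```python
# Sketch of the data collection layer: partition incoming raw records
# into the four data-node groups named in the outline. The record
# format is an invented assumption, not the Health-CPS design itself.

GROUPS = ("research", "expense", "clinical", "activity")

def collect(records: list[dict]) -> dict[str, list[dict]]:
    buckets: dict[str, list[dict]] = {g: [] for g in GROUPS}
    for rec in records:
        group = rec.get("source")
        if group not in buckets:
            raise ValueError(f"unknown data source: {group!r}")
        buckets[group].append(rec)
    return buckets

sample = [
    {"source": "clinical", "payload": "EMR entry"},
    {"source": "activity", "payload": "wearable heart-rate sample"},
]
buckets = collect(sample)
print(len(buckets["clinical"]), len(buckets["expense"]))  # 1 0
```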

  13. Introduction
    Hypersonic flow dynamics is very different from high-speed or low supersonic flow, and new methods are still being developed for accelerating flow to hypervelocities.
    Shortfalls include large sound levels during tunnel runs, the provision of longer run times, the capability of providing higher dynamic pressure, and high levels of enthalpy; at present there is no real hypersonic wind tunnel that has all of these capabilities and can provide authentic data for full-scale vehicles at hypersonic speeds.
    Hypersonic wind tunnels of the closed-circuit type provide simulation of high Reynolds number flows; these wind tunnels are limited by mass flow rate but can be run for a longer duration.
    The ultimate desire of any hypersonic wind tunnel facility is manifold, but above all the correct simulation of flow physics with accurate extraction of force and moment data. The objective of this research is to provide the reader comprehensive knowledge of the hypersonic test facilities in use.
    Hypersonic Flow Features and Test Facility Challenges
    Hypersonic flow behaves differently from subsonic and supersonic flows, so the design of any ground test facility becomes more complex; the biggest challenge is that basic fluid flow equations become ineffective.
    Dissociation of oxygen and nitrogen molecules at high temperatures, and their recombination on further increases in temperature, is one of the prime problems designers face during the design stage of any facility; the material requirements and the attainment of such high temperatures in the confined space of the test section are difficult to achieve.
    Designers are trying hard to meet all challenges in one facility, but the author believes this has not yet been achieved.
    Hypersonic Test Facility Categories
    Hypersonic wind tunnels can be either closed-loop or open-loop circuit; four broad categories are then defined based on wind tunnel run time.
    Continuous wind tunnels are used for long-duration operation while shock tubes are employed for very short durations; Mach number and Reynolds number simulation can be carried out in both continuous and blowdown wind tunnel types.
    High temperatures up to 10,000 K are achieved in tunnels which run for a short time of a few milliseconds.
    Cataloguing of World Renowned Hypersonic Wind Tunnels Facilities
    Among various factors, basic critical information is required before selecting any wind tunnel facility to undertake the required tests and measurements; the prime factors are test section size and shape, balance type and capability (such as six-component strain gauge balances), and the Mach number it can achieve.
    Data on 47 in-service hypersonic wind tunnels around the world has been accumulated, and the capability of every wind tunnel is listed along with its location and country.
    The basic purpose column of the table covers the capability of every tunnel along with any special features.
    Country Wise Breakdown of Hypersonic Tunnel
    Most of the research in the field of hypersonic flow is being undertaken by the United States of America, France, and China.
    World’s Achievement in the Hypersonic Ground Facilities
    Critical analysis of 19 hypersonic tunnels available in the USA revealed that all wind tunnel facilities use air as the working fluid; it is observed that no facility, even in countries such as the United States of America and the United Kingdom, can achieve all properties at one time.
    Conclusions
    The research described the basic construction of different types of hypersonic ground test facilities, and the study concluded that aerodynamic performance parameters can be measured using the continuous type of wind tunnel.
    Recommendations
    The five most advanced countries of the world may join hands for research and development.
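
One reason hypersonic testing is so demanding, as the dissociation discussion above suggests, is how quickly stagnation temperature grows with Mach number. For a calorically perfect gas, T0/T = 1 + ((gamma - 1)/2) * M^2. The Mach number and freestream temperature below are illustrative assumptions, and at such temperatures the perfect-gas assumption itself begins to break down:

```python
# Isentropic stagnation-to-static temperature ratio for a calorically
# perfect gas: T0/T = 1 + (gamma - 1)/2 * M**2. Illustrative numbers
# only; real hypersonic flows depart from perfect-gas behavior.

def stagnation_ratio(mach: float, gamma: float = 1.4) -> float:
    return 1.0 + (gamma - 1.0) / 2.0 * mach**2

mach, t_static = 10.0, 220.0            # assumed freestream conditions
ratio = stagnation_ratio(mach)          # ~21 for air at Mach 10
print(t_static * ratio)                 # ~4600 K stagnation temperature
```

This back-of-envelope number shows why test-section materials and heating are the prime design problems the outline describes.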

  14. A First Step Toward Network Security Virtualization: From Concept to Prototype
    SECTION I.
    Introduction: Security technologies are becoming more complex as it has to protect several departments with varying network priorities and security protection requirements or policies. Also, a cloud network has its network element that involves many hosts and network devices to provide services to a significant number of dynamic users.
    Adding protection devices increases the overall cost and makes network configuration and management even more difficult. A firewall can manage access in a network, and a network intrusion detection system can detect attacks such as DDoS.
    A new concept called Network Security Virtualization (NSV) provides virtualized network security to service users. NSV can be achieved with two strategies: (i) transparently redirecting flows to preferred network security devices, and (ii) enabling network security response functions on network devices.
    Software-Defined Networking (SDN) makes it possible to control flows and enable security response features directly at a network device.
    NETSECVISOR is a network security virtualization prototype that uses pre-installed, static security devices to provide network users with dynamic security service management.
    SECTION II.
    Problem Statement
    Motivating Example: An example NSV setup needs a few essential elements: six routers (R1 – R6), three hosts (H1 – H3), two VMs (VM1 and VM2), and a Network Intrusion Detection System. By blocking network packets from each infected host, NETSECVISOR protects the VMs in the network from compromised hosts.
    SECTION III.
    Design
    A. Network Security Virtualization Concept: Network security virtualization has two main functions: (i) transparently transmit network flows to desired security devices, and (ii) enable security response functions in network devices when required.
    B. Overall Architecture of NETSECVISOR: Software-Defined Networking (SDN) is an evolving network technology that allows network flows to be managed and overall network status to be tracked efficiently. NETSECVISOR has five main components:
    (i) System and policy manager, (ii) Routing rule generator, (iii) Flow rule enforcer, (iv) Response manager, and (v) Data manager.
    C. How to Register Security Devices: To register existing security devices with NETSECVISOR so they can be used, a cloud administrator writes a simple script that specifies (i) device ID, (ii) device type, (iii) device location, (iv) device mode, and (v) supported functions.
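    The registration step amounts to parsing a five-field record per device into a lookup table. The sketch below is a hypothetical illustration of that idea in Python (the paper's implementation language); the comma-separated format and field names are assumptions, not NETSECVISOR's actual script language.

```python
# Hypothetical registration format: one device per line, five
# comma-separated fields matching the five items listed above.
def register_devices(script: str) -> dict:
    devices = {}
    for line in script.strip().splitlines():
        dev_id, dev_type, location, mode, functions = (
            field.strip() for field in line.split(",", 4))
        devices[dev_id] = {
            "type": dev_type,            # e.g. NIDS, firewall
            "location": location,        # attachment point in the network
            "mode": mode,                # passive or in-line
            "functions": functions.split("|"),  # supported response functions
        }
    return devices

registry = register_devices("nids1, NIDS, R3, passive, isolate|redirect")
```

    After this step the controller can look up any registered device by its ID when building routing paths or selecting a response function.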
    D. How to Create Security Policies: After security devices are registered with NETSECVISOR for a cloud network, their details are shown to the network's users, who can then create security policies based on them.
    E. How to Decide Routing Path: To meet security requirements, NETSECVISOR should consider the following two factors: (i) network packets should pass through specific security devices, and (ii) the network packet routing paths have to be developed and optimized.
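    One simple way to satisfy both factors is to splice two shortest paths: source to the required security device, then device to destination. The toy topology and routing logic below are a hypothetical sketch of that idea, not NETSECVISOR's actual algorithm.

```python
from collections import deque

# Toy topology (adjacency lists); node names are invented for illustration.
GRAPH = {
    "H1": ["R1"], "R1": ["H1", "R2", "R3"], "R2": ["R1", "R4"],
    "R3": ["R1", "R4"], "R4": ["R2", "R3", "H2"], "H2": ["R4"],
}

def shortest_path(src, dst):
    """Plain BFS shortest path in hops (factor (ii): keep paths short)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def path_via_device(src, dst, device):
    """Factor (i): splice two legs so the route crosses the security device."""
    first, second = shortest_path(src, device), shortest_path(device, dst)
    return first + second[1:] if first and second else None

# Force H1 -> H2 traffic through the NIDS attached at R3:
route = path_via_device("H1", "H2", "R3")  # ["H1", "R1", "R3", "R4", "H2"]
```

    Any real controller would also have to balance load across devices and cache computed paths, but the splice captures the core constraint.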
    F. How to Enable a Security Response Function: NETSECVISOR provides five security response techniques that require neither installing physical security equipment nor changing network configurations for packet handling. These techniques operate in two modes: passive mode and in-line mode.
    SECTION IV.
    Implementation: NETSECVISOR is implemented in approximately 1,200 lines of Python code. The two data structures in the system and policy manager are implemented as hash tables for fast access.
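    Since Python dicts are hash tables, the two stores might look roughly like the sketch below. The structure and field names are assumptions for illustration, not the prototype's actual code.

```python
# Two hash-table stores, as in the system and policy manager:
device_table = {}   # device ID -> registered device information
policy_table = {}   # policy ID -> user-defined security policy

def add_policy(policy_id, flow_match, device_id, response):
    """Record a user policy: which flows must visit which device, and
    what response to take on detection (names are hypothetical)."""
    policy_table[policy_id] = {
        "flow": flow_match,     # traffic the policy applies to
        "device": device_id,    # security device that must see the flow
        "response": response,   # action to take on detection
    }

device_table["nids1"] = {"type": "NIDS", "mode": "passive"}
add_policy("p1", {"dst": "VM1"}, "nids1", "isolate")
```

    Keying both tables by ID gives the controller O(1) average-case lookups when it enforces flow rules.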
    SECTION V.
    Evaluation
    A. Evaluation Environment: To check the adequacy and effectiveness of NETSECVISOR, three different network topologies are used: two in a virtual network environment and one in a commercial switch environment.
    B. Generation Time and Network Cost Measurement: NETSECVISOR can construct a routing path in 1 millisecond, which translates to 1,000 network flows per second.
    C. CPU and Memory Overhead and Response Time: Each topology's CPU and memory consumption overhead is also assessed; NETSECVISOR adds some overhead when it creates routing paths.
    D. Discussion on Scalability: A large cloud network has millions of clients and virtual machines, but each routing path can be generated independently and asynchronously.
    E. Case Study: The prototype is easy to use, and clients can quickly build their own security rules. Also, while using NETSECVISOR, users have more choices for system types, traffic types, and response activities.
    SECTION VI.
    Limitation and Discussion: This article focuses on the situation where only a few security devices need security monitoring.
    SECTION VII.
    Conclusion: This paper introduces the definition of network security virtualization (NSV), which can virtualize security resources and functions and provide security response functions from network devices as needed. To demonstrate the usefulness of NSV, the developers implemented a prototype framework called NETSECVISOR.
