Summary of Shin et al.’s “A First Step Toward Network Security Virtualization: From Concept to Prototype”

TO: Prof. Ellis

FROM: Tarin Sultana

DATE: 03/03/2021

SUBJECT: 500-Word Summary of Article About Network Security

The following is a 500-word summary of a peer-reviewed article about how to secure virtualized networks using network security virtualization (NSV). The authors introduce a new method of network security virtualization, implemented in a prototype called NETSECVISOR, that imposes minimal management cost. According to the authors, “The main goal of this work is to propose a new idea, network security virtualization (NSV), and design a prototype system (with the name of NETSECVISOR) that can enable NSV in cloud-like networks to help all tenants easily use security services” (Shin et al., 2015). NSV has two main functions: (i) transparently redirecting network flows to desired security devices, and (ii) enabling security response functions in network devices when required. As an example of an NSV setup, some essential elements are necessary, such as six routers (R1–R6), three hosts (H1–H3), two VMs (VM1 and VM2), and a network intrusion detection system (NIDS). By blocking network packets from each infected host, NETSECVISOR protects the network from compromised VMs. Software-defined networking (SDN) is an emerging network technique that allows managing network flows and tracking overall network status efficiently.

NETSECVISOR has five main functions: (i) a system and policy manager, (ii) a routing rule generator, (iii) a flow rule enforcer, (iv) a response manager, and (v) a data manager. To register existing security devices with NETSECVISOR, a cloud administrator must use a simple script language that requires (i) a system ID, (ii) the device form, (iii) the device position, (iv) the device mode, and (v) the supported functions. After security devices are registered for a cloud network, NETSECVISOR shows the devices’ details to users of the cloud network. For security requirements, NETSECVISOR must consider the following two factors: (i) network packets should pass through specific security devices, and (ii) the network packet routing paths have to be developed and optimized. NETSECVISOR introduces five security response techniques that require neither installing physical security equipment nor changing network configurations for packet handling. These techniques operate in two modes: passive mode and in-line mode.

To check the adequacy and effectiveness of NETSECVISOR, the authors test three different network topologies: two in a virtual network environment and one in a commercial switch environment. NETSECVISOR can construct a routing path in 1 millisecond, which translates to 1,000 network flows per second. Each topology’s CPU and memory consumption overhead is also assessed; NETSECVISOR adds overhead when it creates routing paths. A comprehensive cloud network has millions of clients and virtual machines, and each routing path can be generated independently and asynchronously. The NETSECVISOR prototype is easy to use: clients can quickly build their own security rules, and users have more choices for device types, traffic types, and response activities. Also, NSV can virtualize security resources and functions and provide security response functions from network devices as needed.
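
To make the registration script described above concrete, here is a minimal, hypothetical Python sketch of a device registration record with the five required fields; the field names and values are my own illustrative assumptions, not NETSECVISOR’s actual script syntax.

# Hypothetical security-device registration record, modeled on the five
# fields the article lists. Names and values are illustrative assumptions.

REQUIRED_FIELDS = ["device_id", "device_type", "location", "mode", "functions"]

nids_registration = {
    "device_id": 1,                      # (i) system ID
    "device_type": "NIDS",               # (ii) device form (here, a network IDS)
    "location": "R3",                    # (iii) device position (attached router)
    "mode": "passive",                   # (iv) device mode: passive or in-line
    "functions": ["detect", "isolate"],  # (v) supported security functions
}

def validate_registration(entry):
    """Check that a registration entry supplies every required field."""
    return all(field in entry for field in REQUIRED_FIELDS)

print("Valid registration:", validate_registration(nids_registration))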

Reference

Shin, S., Wang, H., & Gu, G. (2015). A first step toward network security virtualization: From concept to prototype. IEEE Transactions on Information Forensics and Security, 10(10), 2236-2249. https://doi.org/10.1109/TIFS.2015.2453936

Summary of Zhang et al.’s “Health-CPS: Healthcare Cyber-Physical System Assisted by Cloud and Big Data”

TO: Prof. Ellis

FROM: Edward Dominguez

DATE: 3/3/2021

SUBJECT: 500-word Summary of Article About Healthcare CPS

The following is a 500-word summary of a peer-reviewed article about how cloud and big data technologies are helping the healthcare cyber-physical system. The authors discuss Health-CPS, a cyber-physical system for patient-centric healthcare applications and services that is built on cloud and big data analytics technologies. The results of this study show that cloud and big data technologies can be used to enhance the performance of the healthcare system so that people can enjoy various smart healthcare applications and services. Information technology is very important to the healthcare field. As time passes, more data is used than ever before, which leads to challenges in data management, storage, and processing. In healthcare, the volume of data keeps increasing as new technologies, such as wearable health devices, are released. It is important for medical equipment to collect data very quickly in order to respond to emergencies. Healthcare devices create different types of data, including text, image, audio, and video, which may be structured or unstructured. The value of healthcare data can be maximized through data fusion of EHRs and electronic medical records. Cloud computing and big data can also help organize healthcare data.

Even though there are many innovations in the healthcare field, some issues still need to be resolved. Healthcare data that are stored together on the physical layer are still logically separated, which is an issue. The biggest challenge in building a comprehensive healthcare system is handling heterogeneous healthcare data from multiple sources. In the healthcare industry, cloud and big data are very important, and they are becoming a trend in healthcare innovation. Medicine relies on specific data and analysis. The system must support different types of healthcare equipment, and it is important to deploy suitable methods for different data structures to enable efficient online or offline analysis. The system is expected to provide many applications and services for different roles.

The system has three layers. The data collection layer collects raw data in different structures and formats and ensures security. The data management layer includes distributed file storage (DFS) and distributed parallel computing (DPC). The application service layer gives users visual data and analysis results. According to the authors, “in the data collection layer, various healthcare data are collected by the data nodes and are transmitted to the cloud through the configurable adapters that provide the functionality to preprocess and encrypt the data” (Zhang et al., 2017, p. 90). Data nodes can be divided into four groups: research data, medical expense data, clinical data, and individual activity and emotional data. Research data has opened a new way for scientific research, such as identifying side effects and new effects of drugs. Medical expense data are nontraditional healthcare data, such as medical insurance reimbursements and medical bills, that are geographically dispersed and can be used to estimate medical costs. Clinical data are produced by many medical services, such as EMR and medical imaging, and must be handled while preserving patient privacy.
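
To make the three-layer flow concrete, here is a minimal Python sketch of the collection, management, and application service layers; the function names, sample records, and the hash-based stand-in for encryption are my own illustrative assumptions, not the system’s actual design.

# Toy sketch of the three Health-CPS layers: collection (an adapter that
# preprocesses and protects data), management (a stand-in for distributed
# storage), and application services (analysis results for users).

import hashlib

def collect(record, key):
    """Collection layer: adapter preprocesses and pseudonymizes a record.
    (A hash is used here only as a simple stand-in for real encryption.)"""
    name = record["patient"].strip().lower()              # preprocess
    record["patient"] = hashlib.sha256((key + name).encode()).hexdigest()
    return record

storage = []  # management layer: stand-in for distributed file storage

def manage(record):
    storage.append(record)

def serve_average(metric):
    """Application service layer: return an analysis result for users."""
    values = [r[metric] for r in storage if metric in r]
    return sum(values) / len(values)

manage(collect({"patient": " Alice ", "heart_rate": 72}, key="demo-key"))
manage(collect({"patient": "Bob", "heart_rate": 80}, key="demo-key"))
print("Average heart rate:", serve_average("heart_rate"))  # 76.0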

Reference

Zhang, Y., Qiu, M., Tsai, C., Hassan, M. M., & Alamri, A. (2017). Health-CPS: Healthcare cyber-physical system assisted by cloud and big data. IEEE Systems Journal, 11(1), 88-95. https://doi.org/10.1109/JSYST.2015.2460747

Summary of Zaman et al.’s “Energy access and pandemic-resilient livelihoods: The role of solar energy safety nets”

To: Professor Jason W. Ellis

From: Pavel Hlinin

Date: March 3, 2021

Subject: 500-Word Summary of Article About Importance of Solar Energy Safety Nets

This 500-word summary discusses issues related to restricted access to stable sources of energy for some categories of people. The authors convince the reader that stable sources of energy are especially important because, during the pandemic, the lack of them exacerbated the already poor living conditions of certain groups of people. First of all, the authors describe third-world countries with a low standard of living. The role of solar energy safety nets (SESNs) has grown recently because they provide many social benefits and offer an efficient way to survive the pandemic.

Developing countries are hit hard by the COVID-19 pandemic; however, available solar energy safety nets give them a chance to resist pandemics and raise their standard of living. People who live in rural areas (called the “last mile”) sometimes do not have access to technology that depends on energy services. At the same time, access to energy is very important: it helps poor people improve their education and develop their capacity to prepare for market-related or natural risks. Also, access to electricity provides access to education and jobs, and it allows people to stay at home and decrease the spread of the virus.

However, people from the last mile very often do not have access to electricity. In general, access to energy is expensive and often requires government subsidies and material assistance. Energy assistance programs make energy available to the poorest groups of people. Expanding the grid in rural areas is a good solution for people who live far away, and their issues may also be resolved by an independent solar home system, which provides energy at the household level. Many countries have their own programs to extend independent home solar systems, but sometimes national political processes delay the provision of off-grid energy access.

As discussed above, COVID-19 hit poor people hard and increased the difficulty of paying for energy services. Some countries took actions to stave off an energy crisis, such as cutting the price of solar kits by 50% or helping companies that operate with renewable energy sources. At the same time, other countries expanded their other federal pro-poor programs and adopted over 1,000 social programs, thereby reducing funding for energy programs. That caused some solar energy service providers to decline and even go bankrupt. Continuous government funding of energy programs is essential to expand access to energy for last-mile people and help them better cope with the impact of the COVID-19 pandemic. A well-designed SESN program makes it possible to get out of the current crisis, as it gives employment opportunities to people living in the area and, with an increase in production potential, makes it possible to earn money by selling the energy produced to other people. Also, these programs open up a spectrum of affordable services for the poor, thereby smoothing out social inequalities.

Energy poverty affects millions of people in developing countries, limiting their ability to cope with pandemics such as COVID-19. The authors draw our attention to the fact that changing priorities in a country’s policy threaten programs supporting the development of solar energy. The main challenge for politicians is to keep long-term goals in view, even in a short-term crisis. That will help people not only survive the pandemic but even raise their standard of living.


Reference

Zaman, R., Van Vliet, O., & Posch, A. (2021). Energy access and pandemic-resilient livelihoods: The role of solar energy safety nets. Energy Research & Social Science, 71, 101805. https://doi.org/10.1016/j.erss.2020.101805

Summary of Wang et al.’s “Response time of a ternary optical computer that is based on queuing systems”

TO: Prof. Ellis

FROM: Remonda Mikhael

DATE: 3/3/2021

SUBJECT: 500-Word Summary of Article About Ternary Optical Computer Construction by Wang et al.

The following is a 500-word summary of a peer-reviewed article about ternary optical computers. The authors go into detail about the construction and theories of ternary optical computers, explaining the theory, how it was tested, and what the results were.

Optical computers have been gaining attention because of the speed and power at which they operate, and the first ternary optical computer was proposed by Jin et al. Several theories and additions contributed to the construction of the first ternary optical computer. What had to be determined was how fast the quality of service would be, which could only be tested once construction was complete. This article explains the four-stage design of the ternary optical computer. The remainder of the article explains how the systems work, which algorithms are used, and potential future directions.

The ternary computer is broken up into three sections, each handling specific functions that allow the computer to operate efficiently. What makes this computer different from others is that it can process multiple inputs at once, reconfigure itself to understand the user’s request, and provide plenty of space in the processor to run complex algorithms. Queueing theory is used to measure how fast the computer can operate; there are several different ways this is tested in different settings, and all of these tests are applied to this ternary optical computer.

Two novel strategies are presented, immediate scheduling (IS) and computing-accomplished scheduling (CAS), along with how they are used. According to the authors, “Under the IS strategy, the data bits of the optical processor are equally divided into n parts and each part corresponds to a small optical processor that can be independently used” (Wang et al., 2019, p. 6243). The strategy works by sending data to unoccupied processors so the operations can be done quickly.

The other strategy is computing-accomplished scheduling, which happens after the request has been computed. This is done to make sure all requests are operated on simultaneously: small batches of data are sent to the processor first, then larger ones. Requests are received by the receiving server and transferred to the preprocessing service to be processed into tri-valued logic operations. Requests are handled on a first-come-first-served basis and then sent to the processor, which translates each request into machine operations and begins working on it.
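
As an illustration of this scheduling idea, here is a toy first-come-first-served dispatcher in Python that divides a processor into n independently usable parts, in the spirit of the IS strategy; the queue discipline, numbers, and function names are my own illustrative assumptions, not the authors’ queueing model.

# Toy FCFS dispatcher: the processor is split into n_parts sub-processors,
# and each arriving request is assigned to the earliest-available part.

import heapq

def fcfs_dispatch(requests, n_parts=4):
    """requests: list of (arrival_time, service_time) pairs.
    Returns each request's response time (waiting plus service time)."""
    part_free = [0.0] * n_parts                # when each sub-processor is next free
    heapq.heapify(part_free)
    responses = []
    for arrival, service in sorted(requests):  # first come, first served
        free_at = heapq.heappop(part_free)     # earliest-available sub-processor
        start = max(arrival, free_at)
        finish = start + service
        heapq.heappush(part_free, finish)
        responses.append(finish - arrival)
    return responses

# Five requests arriving close together are spread over four sub-processors.
print(fcfs_dispatch([(0.0, 1.0), (0.1, 0.5), (0.2, 2.0), (0.3, 0.4), (0.4, 1.5)]))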

The scheduling strategies and batch size have a direct effect on the various service processes. Using both strategies can increase the time before results are output, because there are four stages each request must undergo before it is completed; however, the response time does not increase dramatically, because the system operates on requests in parallel, so the delay is only minor. Since all requests are not only operated on in parallel but are also split across four smaller processors, the operating speed of ternary optical computers is quick.

Reference

Wang, X., Wang, X., Zhang, S., Gao, S., Zhang, M., Zhang, J., & Xu, Z. (2019). Response time of a ternary optical computer that is based on queuing systems. The Journal of Supercomputing, 76, 6238-6257.

Summary of Han et al.’s “Geosocial Media as a Proxy for Security: A Review”

To: Prof. Ellis

From: MD Jahirul Hasan

Date: 03/03/2021

Subject: 500-Word Summary of Article About Security in Social Networking

The following is a 500-word summary of a peer-reviewed article, “Geosocial Media as a Proxy for Security: A Review,” by Zhigang Han, Songnian Li, Caihui Cui, Daojun Han, and Hongquan Song, published in 2019, which identifies various prominent themes in need of more research amid the continuous growth of social security concerns and cybercrime management. While the majority of people chase short-term solutions, the authors take a different approach to redefine the concept of security in social networking, where users’ privacy and security concerns play a vital role in the development of sustainable social networking, and they consider geosocial media a proxy for this security. Network security is a set of rules and configurations designed to preserve the integrity, confidentiality, and usability of all software and hardware technologies for computer networks and data. To protect itself from the ever-growing landscape of cyber threats in the wild today, any company, regardless of scale, sector, or infrastructure, needs a degree of network security solutions in place. In other words, the authors make it clear that network security is the defense against hacking, misuse, and unauthorized alteration of access to files and directories on a computer network. In particular, social media, when paired with location information, can be used as a proxy for security event detection and security situational awareness. The paper includes a synopsis of geosocial media data and the associated processing and analysis methods used for detecting security events, and it summarizes the general framework of security-related analyses based on geosocial media. According to the authors, “Social media data provide rich information that reflects people’s social behavior. In the security field, various groups of terrorists and gangs have increasingly recognized the value of social media and have actively used it to plan and organize activities, recruit members, spread terrorist ideas and publish various terrorist messages to expand their influences” (Han et al., 2019, p. 154225).

Considering the economic and moral elements of the equation, the authors divide security-related analysis tasks into two types: security event detection, and security situational awareness and assessment. Security events fall into six types: natural disasters, man-made disasters, violent incidents, military events, sociopolitical events, and other security events. Turning to the analysis of different networking systems, the authors go the extra mile to illustrate the general process of security-related analysis based on geosocial media; they identify two types of datasets, social media datasets and auxiliary analysis datasets, and discuss the corresponding data acquisition and preprocessing methods. Geosocial networks and apps, such as Facebook’s location features, are designed to allow their users to share their geolocated data. Among all personally identifiable information (PII), knowing the position of an individual is one of the greatest threats to privacy. One of the most exciting prospects for geosocial media is its ubiquity around the world, including its widespread adoption by the urban poor in many developing nations. For instance, the spatio-temporal data of a person may be used to infer the location of their home and workplace, to track their movements and activities, to learn details about their interests, or even to detect a change from their normal behavior.
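
To illustrate the location-inference risk described above, here is a small Python sketch that guesses a user’s home as their most frequent nighttime posting location; the sample data, the night-hours window, and the grid size are illustrative assumptions, not a method taken from the review.

# Toy home-location inference from geotagged posts: bucket nighttime posts
# onto a coarse grid and return the most common cell.

from collections import Counter
from datetime import datetime

posts = [  # (timestamp, latitude, longitude) of geotagged posts
    (datetime(2019, 5, 1, 23, 10), 40.695, -73.987),
    (datetime(2019, 5, 2, 1, 30), 40.695, -73.987),
    (datetime(2019, 5, 2, 13, 0), 40.752, -73.977),  # daytime: likely work
    (datetime(2019, 5, 3, 0, 45), 40.695, -73.987),
]

def infer_home(posts, night_start=22, night_end=6, grid=0.01):
    """Return the most common nighttime location, rounded to a coarse grid."""
    night = [
        (round(lat / grid) * grid, round(lon / grid) * grid)
        for ts, lat, lon in posts
        if ts.hour >= night_start or ts.hour < night_end
    ]
    return Counter(night).most_common(1)[0][0] if night else None

print("Inferred home cell:", infer_home(posts))
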
The article summarizes the progress of key technologies related to detecting security events and assessing security situations, including natural language processing, social network analysis, location inference and geospatial analysis, and image or video understanding and visual analysis. The paper concludes with possible future directions and areas of research that could be addressed and investigated.

Reference

Han, Z., Li, S., Cui, C., Han, D., & Song, H. (2019). Geosocial media as a proxy for security: A review. IEEE Access, 7, 154224-154238. https://doi.org/10.1109/ACCESS.2019.2949115

Summary of Ralph et al.’s “Pandemic Programming: How COVID-19 Affects Software Developers and How Their Organizations Can Help”

TO: Prof. Ellis
FROM: Benson Huang
DATE: 3/3/2021
SUBJECT: 500-Word Summary of Article on COVID-19’s Effect on Software Developers

The following is a 500-word summary of a peer-reviewed article in which the authors investigate the effects of the pandemic on developers’ wellbeing and productivity through surveys. COVID-19 was declared a pandemic by the WHO on March 11, 2020, which resulted in lockdowns, and as such many workers were either laid off or told to work from home. Because people were forced to work at home without preparation, many problems and issues arose. According to the authors, “People are less likely to comply when they are facing a loss of income, personal logistical problems, isolation, and psychological stress (as cited in DiGiovanni et al., 2004).” Smaller businesses will try to stay open, and people whose basic needs are at risk are less likely to comply with containment efforts. After the pandemic, we will find more businesses allowing remote working. However, this is not practical for every business, as some employees must work in-store and some do not have a dedicated workspace at home. There are reports of remote working being more productive, but some of these are self-reports, which may be biased. Measuring productivity for software developers is difficult, as different lines of code can have varying effects on a program; yet some companies still use lines of code as a measure for their software developers. It has been found that software developers’ wellbeing is closely related to job satisfaction, so keeping them happy is important.

A questionnaire was sent out to collect data to test several hypotheses. The target of the study was software developers who used to work in an office but now work at home, although the survey was open to all software developers. The survey was fully anonymous, with a filter question for people who did not meet the requirements. Although there was no cash compensation for completing the survey, the authors offered to donate to an open-source project of the respondent’s choice. To get as many responses as possible, the survey was advertised on many websites and translated for different countries, and for some countries a different platform had to be used because Google was blocked. According to the authors, “We received 2668 total responses of which 439 did not meet our inclusion criteria and 4 were effectively blank leaving 2225” (Ralph et al., 2020, p. 4940). The results showed that two of the hypotheses were supported. Some interesting patterns were found, one of which was that isolated people tend to be more afraid; some patterns were consistent with studies of SARS back in 2004. Overall, the results showed that software developers working from home are experiencing lower productivity and wellbeing. As such, normal productivity rates should not be expected during pandemics, and organizations should accept the fact that employees cannot output as much work. Some things to note about the survey: Google Forms is blocked in some countries, so an alternative is needed; working with international teams on a multilanguage survey can generate large samples; and COVID-19 is creating strains on businesses, organizations, and people.

Reference

Ralph, P., Baltes, S., Adisaputri, G., Torkar, R., Kovalenko, V., Kalinowski, M., Novielli, N., Yoo, S., Devroey, X., Tan, X., Zhou, M., Turhan, B., Hoda, R., Hata, H., Milani Fard, A., & Alkadhi, R. (2020). Pandemic programming: How COVID-19 affects software developers and how their organizations can help. Empirical Software Engineering, 25(6), 4927–4961. https://doi.org/10.1007/s10664-020-09875-y

Summary of Lawson’s “Rational function distribution in computer system architectures: Key to stable and secure platforms”

TO: Prof. Ellis

FROM: Ralph Ayala

DATE: 2/17/21

SUBJECT: 500-Word Summary of Article About Computer Systems

The following is a 500-word summary of an article about problems regarding the implementation of applications in computer-based systems. The author discusses the effects of a model that involves technology at various levels, where decisions must be made to keep platforms stable and secure. Computer systems suffer from a lack of rational function distribution across the many levels of hardware and software. Rational function distribution makes it possible to meet important goals for software elements while minimizing complexity. The issue is that the industry’s combined hardware and software products have not been given the proper function distribution needed to create stable platforms. A model for function distribution is used to show the effects and costs at certain levels of hardware and software. Each level contains different materials and uses tools for more complicated projects. Each level inherits complexity from the contents of the levels below it through the process of mapping. As you go up the levels, the number of people active in each level increases, and the cost of complexity increases; lower levels create less complexity. Since complexity is passed upward, it has caused problems in the form of unreliable and insecure platforms.

The first principle is giving the problem to someone else who can solve it for you. The second principle is giving the user all possibilities of what to do. The third principle is using a tool that can be adapted to perform a function. The fourth principle is accepting whatever design mistake was made and determining whether it can still fit the needs of what has to be done. Determine whether the software is useful or not; if the software becomes a mess, then create software that acts as a bridge between an operating system and applications on a network. The use of patches has become a common way of fixing bugs instead of using a large workforce to fix them. With the effort toward stable and secure platforms, complexities can be fixed without too much work. If there is one thing that is important, it is the interface between software systems, so two approaches are described.

The case of the IBM System/360 turned out to contain a lot of problems regarding the complexity of its decision making. Due to the overwhelming problems that occurred, customers did not have a chance to master it in their own environment. The case of Burroughs involved multiple highly advanced products released without realizing the cost and reliability needed. Had there been a more strategic plan for releasing the products, technology could be different today. The large advancements of technology in the mid-1970s ensured that hardware-software products that could have served good functions did not survive; the focus was instead placed on the performance of processors. Compatibility costs must be made to match safety standards, so this is a time for new computer system architectures to arrive.

Education regarding systems-based knowledge must be provided to computer system architects who work on many computer systems. The role and responsibilities of a computer system architect are as follows: the architect must find mappings between levels and distribute functions to meet goals. Using a structure requires creativity, and structure must be central to any designer. People cannot simply solve this with a quick fix; no single solution leads to improvements. In a field like this, it is important to think about scenarios that could happen.
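
To illustrate the idea that complexity is inherited upward through the levels, here is a toy Python sketch; it is my own illustration with made-up level names and numbers, not Lawson’s actual model. Each level adds its own complexity to everything it inherits from below, so total complexity, and with it cost, grows toward the top.

# Toy model: complexity accumulates as it is passed up the system levels.

levels = ["circuits", "microcode", "instruction set",
          "operating system", "middleware", "application"]
own_complexity = [1, 2, 2, 4, 3, 5]  # illustrative per-level complexity units

inherited = 0
for name, own in zip(levels, own_complexity):
    inherited += own  # each level inherits everything below plus its own
    print(f"{name:18s} cumulative complexity: {inherited}")
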
The new dominant actor scenario reduces the complexity of stableware platforms. There is potential for some countries to reach broad solutions regarding stable platforms: “The Russian computing industry has an early history of developing hardware–software approaches, which result in significantly simpler software” (Lawson, 2006, p. 380). In the dominant customer scenario, people demand and produce a kind of trustworthy platform. A catastrophe in certain areas of the business could also create the potential for change. The rebirth is the best scenario because of its increase in products to fight off other competitors. Of course, effort must be put into the instruction sets of such hardware, and competition among such computers can help advance software. Transforming the computer industry toward stableware is an ambitious long-term goal; however, stable computer systems are much needed today. The vice president of research at IBM, Paul Horn, opened a new field for the computer industry: autonomic computing. This field requires machines that perform at their best so users do not have to concern themselves with small details. Creating that kind of system can be quite challenging for anyone because of its complexity. Rational function distribution with autonomic computing can help contain complexities today. Large amounts of code are needed in order to achieve certain functions of the software. Computer system architects must be given the proper knowledge to ensure secure and stable platforms. Stableware could happen in the future, but the risk in accomplishing it could prove to be fatal.

Reference

Lawson, H. W. (2006). Rational function distribution in computer system architectures: Key to stable and secure platforms. IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, 36(3), 377-381. https://doi.org/10.1109/TSMCC.2006.871571

Summary of Zaliwski’s “Computer Network Simulation and Network Security Auditing in a Spatial Context of an Organization”

TO: Professor Jason W. Ellis.

FROM: Mamadou Diallo

DATE: 03/03/2021

SUBJECT: 500-Word Summary of Article About Computer Network Simulation

Competition in the current world of business organizations has been a factor pushing many of them toward advancement. Businesses operate under micro and macro environments, and the two environment types act as forces shaping the scope of changes within the organization. Zaliwski (2005) wants to communicate to such organizations about computer network simulation (CNS) and network security auditing (NSA) that follow a spatial pattern. In the article, Zaliwski (2005) notes that micro-level issues require immediate attention from management. The disruption of business operations by macro-level threats, such as governments and competitors, is held in check by law and custom. However, micro-level threats involve those who are not bound by the laws and rules that govern procedures; an example is computer network threats. Therefore, the suggestion is to have professionals with the skills to manage computer networks which, given the nature of the current system, means wiping out the complexity of security-related software and easing the security auditing methodologies. Zaliwski (2005) reports that the complexity of the systems and the difficulty of the methodologies make them hard for staff to apply. The proposal does not neglect that the security model must align with the policies and procedures of the organization and work hand in hand with the organizational structure (Zaliwski, 2005). Although security systems are critical to the organization’s operations, they need to be simple to use and easy to interact with. Besides, the solution needs to be cheap, involving an effective and inexpensive laboratory. The laboratory, in this case, is used for research and teaching sessions to advance security systems related to computer networks.

As per Zaliwski (2005), the possible method to reach this goal is to create a virtual computer network in a physical lab. That would mean shortening the physical computer chains that would have added expense to the system. The system would work with open-source, commercial, and rare solutions. Also, the system would require graphical network visualization, which would help professionals understand the data connections (Zaliwski, 2005). Besides, software for network design, administration, and management would be necessary for the system to be effective. No other system would work better than one that involves three sub-systems: the spatial models, the repositories, and the virtual networks (Zaliwski, 2005). The entire system would require three computers, where one would serve as the host for all virtual machines, with User Mode Linux creating and maintaining them. The second would connect to the virtual world, while the third would design and keep data for auditing purposes. The system that Zaliwski (2005) describes is lightweight and simple for professional use. Also, the system is cheap and affordable for micro-business firms and teaching departments. The auditing methodologies would be simple, unlike the existing systems that keep professionals scratching their heads. Therefore, the solution is to move the network lab from physical to virtual.
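
To make the host set-up concrete, here is a minimal, hypothetical Python sketch of a host machine launching two User Mode Linux guests for such a virtual lab; the kernel binary name, filesystem images, and memory sizes are illustrative assumptions, not details from the article.

# Hypothetical launch of two User Mode Linux guests from the host machine.

import subprocess

# Each entry becomes one virtual lab node (illustrative values).
guests = [
    {"rootfs": "node1_fs.img", "mem": "64M"},
    {"rootfs": "node2_fs.img", "mem": "64M"},
]

# `./linux` is a User Mode Linux kernel built for the host; `ubd0=` attaches
# the guest's root filesystem image and `mem=` sets its memory size.
processes = [
    subprocess.Popen(["./linux", f"ubd0={g['rootfs']}", f"mem={g['mem']}"])
    for g in guests
]

for p in processes:
    p.wait()  # keep the lab running until the guests shut down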

Reference

Zaliwski, A. J. (2005). Computer network simulation and network security auditing in a spatial context of an organization. Informing Science: International Journal of an Emerging Transdiscipline, 2(7), 159-168.

Summary of Hare and Diehl’s “Noisy Operations on the Silent Battlefield”

TO: Prof. Ellis
FROM: Zeela Rafija
DATE: 03/03/2021
SUBJECT: 500-Word Summary of Article About Cyber Weapons

The following is a 500-word summary of a peer-reviewed article about cyber weapons, which can be divided into intrusive and unintrusive capabilities. The authors discuss cyber weapons and cyber battles and suggest preparing for them. According to the authors, “By 2014, the Russians had honed unintrusive (but noisy and disruptive) cyber operations down to a finely tuned science” (Hare & Diehl, 2019, p. 7). The article advocates preparing for the use of unintrusive precision cyber weapons (UPCW) through improved integration, acquisition, and training. The article is grounded in Dipert’s classification: it explains the two classes of offensive cyber capabilities and illustrates examples of such attacks along with their types. The authors review several preceding conflicts in which unintrusive cyber weapons were operationally influential. They argue that, in Dipert’s nomenclature, cyber arms can be classified into unintrusive and intrusive capabilities. Intrusive cyber attacks are more focused than unintrusive ones. They also note that IoT susceptibility to UPCW attacks was shown in October 2016.

Several examples of UPCW usage are demonstrated in the article, such as its use in local battles. Cyber-criminal organizations have conducted several cyber-attacks in modern conflicts that required pre-positioned exploits. The authors illustrate several examples in this regard, such as a detailed account of Russia’s cyberwar against Ukraine and how it attacked the country’s transport, financial, and other systems. An intrusive attack requires the attacker to gain access. An unintrusive attack, on the other hand, does not require any such access; instead, a sensor, server, or similar component is degraded so that it cannot function properly for a certain amount of time.

The authors offer several potential advantages associated with the development and employment of UPCW. They state that a cyber belligerent can enjoy many benefits by assimilating UPCW into orthodox military processes: the capability of unintrusive precision cyber weapons is less momentary, they require less strictly expert operators than intrusive ones, and a cyber operator who employs UPCW can measure the weapons’ effectiveness more directly. The authors also present the challenges of using UPCW in a particular conflict, which cannot be ignored. Apart from the benefits of developing less sophisticated UPCW, the implications for cyber defenders are apparent: the defender must get ready for improved opponent utilization of UPCW in future conflicts and in periods of increased tension. A limitation of the article is that it does not take any stable position on the amalgamation of electronic warfare (EW) and cyber operations. The authors further imply that EW capabilities have been and will be utilized by opponents to achieve assimilated cyber effects.
The findings demonstrate that, with an emphasis on defense, friendly actors should contemplate developing and investigating more improved options in association with UPCW; this will help enhance the features of UPCW. The article also demonstrates that UPCW should be studied more deeply, along with cyber-EW occurrences.

Reference 

Hare, F., & Diehl, W. (2019). Noisy operations on the silent battlefield. The Cyber Defense Review, 5(1), 153-168. https://www.jstor.org/stable/26902668

Summary of Chen et al.’s “Smart factory of industry 4.0: Key technologies, application case, and challenges”

TO: Professor Jason W. Ellis.

FROM: Motahear Hossain.

DATE: March 3, 2021

SUBJECT: 500-Word Summary of Article About Smart Factory.

This memo is a 500-word summary of the article, “Smart Factory of Industry 4.0: Key Technologies, Application Case, and Challenges,” by Baotong Chen, Jiafu Wan, Lei Shu, Peng Li, Mithun Mukherjee, and Boxing Yin. This article discusses the latest of four distinct industrial revolutions that the world has experienced or is currently experiencing.

According to the research, upgrading the manufacturing industry requires a combination of advanced physical architecture and cyber technologies. These technologies are constructed in three layers: the physical resources layer, the network layer, and the data application layer. The researchers Chen et al. examine those issues scientifically and try to find supplementary solutions with references.

The traditional industry faces threats because of the rapid change in the technology sector. Currently, another advanced system, the cyber-physical system, is emerging from the integration of computation, networking, and physical processes. This system is capable of achieving advanced manufacturing with big data warehouses and cloud-based computing. Several studies (Benkamoun et al., 2014; Radziwon et al., 2014; Lin et al., 2016, as cited in Chen et al., 2018, p. 6506) found that to build a smart factory, manufacturing enterprises need to be more advanced in the production and marketing sectors. It signifies a leap from more outdated automation to a completely connected and flexible system. Research by Chen et al. suggests that there are still many technical problems that need to be solved in order to build a smart factory. An example of this is the physical resources layer: the modular manufacturing unit should be a self-reconfigurable robotic system with a configurable controller, which will have the self-managing ability to take actions such as extending or replacing modules.

According to the article, “Morales-Velazquez et al. developed a new multi-agent distributed control system to meet the requirements of intelligent reconfigurable Computer Numerical Control (CNC)” (Chen et al., 2018, p. 6507), which could utilize its system of control. Another important modular manufacturing unit is intelligent data acquisition, which includes data analysis, reporting, network connectivity, and remote-control monitoring. For data acquisition, the most common wireless sensor network technologies are RFID, ZigBee, and Bluetooth; Zhong et al. proposed an RFID-enabled real-time manufacturing execution system (Chen et al., 2018, p. 6508). According to researcher Zhong et al., this system is capable of making decisions and guaranteeing responses within specified time constraints. The authors also propose standard OPC UA-based interaction in multi-agent systems. With this system, multiple transport layers and a sophisticated information model allow the smallest dedicated controller to freely interact with complex, high-end server applications with real-time communication.
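
As one concrete illustration of OPC UA-based interaction, here is a short client read using the third-party Python opcua (FreeOpcUa) package; the endpoint URL and node ID are hypothetical values for illustration, not details from the article.

# Illustrative OPC UA client read (pip install opcua). The endpoint and
# node ID below are made-up examples, not values from the article.

from opcua import Client

client = Client("opc.tcp://192.168.0.10:4840")  # hypothetical controller endpoint
try:
    client.connect()
    temperature = client.get_node("ns=2;i=1001")  # hypothetical sensor node
    print("Sensor value:", temperature.get_value())
finally:
    client.disconnect()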

Despite all of this, the researchers Chen et al. have drawn attention to the fact that there are still some difficulties in building a smart factory. For example, in order to have a self-reconfigurable robotic system, equipment must be capable of smart manufacturing, and the industrial internet of things must continue to progress.

Reference

Chen, B., Wan, J., Shu, L., Li, P., Mukherjee, M., & Yin, B. (2018). Smart factory of industry 4.0: Key technologies, application case, and challenges. IEEE Access, 6, 6505-6516. https://doi.org/10.1109/ACCESS.2017.2783682