Beginning 500-Word Summary Project, Week 1

According to the syllabus, the 500-Word Summary project involves the following:

Individual: 500-Word Summary, 10%

Individually, you will write a 500-word summary of a technical or scientific article that demonstrates: 1. ability to identify key processes and concepts in a professional science or technology article. 2. ability to describe complex processes and concepts clearly and concisely. 3. an awareness of audience. The summary should cite the article and any quotes following APA format.

Perform the following steps to begin your project:

First, use the library’s journal databases (navigate to Start Your Research > Find Articles > A/Academic Search Complete or I/IEEE Xplore) to find an article of sufficient length (< 4 pages) that focuses on a topic related to your major and career. Save the article’s PDF somewhere safe so that you can easily return to it later.

Second, read the article from start to finish.

Third, write a reverse outline of the article: read each paragraph again, put the article away, type one sentence in your own words summarizing that single paragraph, then read the next paragraph and summarize it in one sentence, and so on until the end of the article. Save this reverse outline someplace safe (we will be using it next week), and copy-and-paste it into a comment on this post (click the title above, “Beginning 500-Word Summary Project, Week 1,” scroll down to the comment box, paste your reverse outline, and click “post comment”).

19 thoughts on “Beginning 500-Word Summary Project, Week 1”

  1. Apple’s technology development has been superior in 2021, and the next iPhone model will be part of the “S” series. The new design will cater more to people who are working from home. All four models will have a smaller notch and a thinner shape. The camera will have more advanced specs, maximizing its megapixels and focusing more on reverse optical zoom in the front lens. The display will be enhanced, and 5G will be enabled along with Wi-Fi 6E. Biometrics will return, offering two options: Touch ID and Face ID.

  2. Article: Social Pal: A combined platform for Internet of Things and Social Networks
    1. Internet of Things (IoT) is one of the future’s important research areas with high potential to positively impact technology development and numerous applications including smart cities, smart homes, and smart health care.
    2. The key requirements for developing new platforms and models for the Internet of Things domain include the necessity of introducing new middleware for the realization of SIoT, since existing platforms have inflexible structures, especially with regard to the objective of introducing a general platform.
    3. This research’s main contributions include: Presenting a possible platform for the IoT including the functionalities needed to integrate things into a social network and identifying the right policies for the establishment and management of social relationships between various objects and the social networks.
    4. The primary advantages of the proposed platform include: Ability to define the relationship as well as create a group of objects with the same features; provides flexible structure as it connects humans and things together.
    5. This study’s approach differs from the literature because the researchers are interested in finding the relationship between things and their owners.
    6. This paper is organized into five main sections: the relationship between IoT and owners, discussion of the current platforms, the proposed platform, a prototype from a use case, and the conclusion.
    7. For the current social IoT architecture and platforms, this study refers readers to recently published studies that apply a social structure to IoT networks and suggest a layered model (a graphical presentation is given) in which the left-hand side represents the client side and the right-hand side represents the server side.
    8. The components of Social Pal include the physical objects and their potential benefits (e.g., desktop computers, scanners, mobile phones, or printers) and the users (in a university setup, professors and students).
    9. The components of the proposed platform discussed include actor and social pal, in which actors use social pal to make new connections, and other components include interface and the internet (the internet provides the open access connection to all the objects used in the platform).
    10. Actors, as a Social Pal component, suggest a democratic platform where both humans and things can interact, take part in the publication of data, and receive commands for data management.
    11. Interface involves the interaction between the actors and the devices which enables input of data and query requests
    12. Internet, another component of social pal, is used to promote communication between the system and the attached devices such as scanner and printers
    13. The case scenario highlights how objects are deployed in the same geographical area to create friendly relations with each other, which facilitates exchange of valuable information about the Internet of Things.
    14. The scenario involved three individuals (Emma, James, and Professor Adams) and various sensors installed to provide key information about environmental factors and to exchange friendship relations with the control rooms.
    15. Pictorial illustrations show the components of the social application client platform, such as profiling, relation management, dynamic relationships, service management functions, and group management.
    16. The process of sending and receiving data to the server begins with the object writing its own API; the API key is read to facilitate retrieval of data, and data is obtained from the SAS server through pull and push methods.
    17. Given the potential benefits the current prototype presents, the future plan is to present the achieved results and evaluate their efficiency.
    18. The proposed platform which is based on the social relationships among objects-similar to that of human relations inherits the key features of social networks like service composition, relationship management and recommendation.
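Item 16’s pull/push exchange can be sketched in code. This is a minimal illustration only: the server address, endpoint paths, and the `X-API-Key` header name are my assumptions for the sketch, not the actual Social Pal API described in the article.

```python
# Sketch of an object pulling from / pushing to a data server using an
# API key, in the spirit of item 16 above. All names are hypothetical.
import json
import urllib.request

SAS_SERVER = "http://sas.example.edu"  # hypothetical server address

def build_pull_request(api_key: str, object_id: str) -> urllib.request.Request:
    """Build a GET request that pulls the latest readings for one object."""
    url = f"{SAS_SERVER}/objects/{object_id}/data"
    return urllib.request.Request(url, headers={"X-API-Key": api_key})

def build_push_request(api_key: str, object_id: str, reading: dict) -> urllib.request.Request:
    """Build a POST request that pushes a new sensor reading."""
    url = f"{SAS_SERVER}/objects/{object_id}/data"
    body = json.dumps(reading).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
```

The same key authenticates both directions; the server would look it up to decide which object (printer, scanner, sensor) is speaking.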

  3. Cloud computing services are extremely helpful to many, but they are complex and expensive for a new company to set up. Instead of building data centers filled with expensive servers, we could use the Credit Union Cloud Model, which lets us borrow resources from computers that either have an overabundance of resources or are not currently in use. We would still need one or more machines dedicated to managing the volunteered machines (Member Nodes); these would serve as the Cloud’s server. To make this Cloud Model work, software must be installed on all nodes involved, including the Member Nodes and the Management Node(s), allowing resources to be managed and allocated properly. There are quite a few decisions to make in terms of which software to choose, with different optimizations for the hardware being utilized, but in the end the idea of data-center-less cloud computing is possible and even viable.
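The Member Node / Management Node split described above can be sketched as a toy scheduler. This is not the paper’s implementation, just an illustration of the idea: the Management Node tracks what each volunteer machine has free and places tasks greedily.

```python
# Toy sketch (hypothetical, not the actual Credit Union Cloud software):
# a management node that registers member nodes and allocates tasks to
# whichever member currently has the most spare CPU.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MemberNode:
    name: str
    total_cpu: int            # spare CPU cores this member volunteers
    used_cpu: int = 0
    tasks: list = field(default_factory=list)

    @property
    def free_cpu(self) -> int:
        return self.total_cpu - self.used_cpu

class ManagementNode:
    """Tracks volunteered member nodes and allocates tasks to them."""
    def __init__(self):
        self.members = []

    def register(self, node: MemberNode) -> None:
        self.members.append(node)

    def allocate(self, task: str, cpu_needed: int) -> Optional[MemberNode]:
        # Greedy placement: pick the member with the most free CPU.
        candidates = [m for m in self.members if m.free_cpu >= cpu_needed]
        if not candidates:
            return None  # no volunteered machine can host the task right now
        best = max(candidates, key=lambda m: m.free_cpu)
        best.used_cpu += cpu_needed
        best.tasks.append(task)
        return best
```

A real system would also handle members leaving mid-task, which is the hard part of volunteer computing; this sketch only shows the allocation step.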

  4. Cloud storage solutions are widely used by companies and enterprises to put their data and information on cloud servers. Users can upload their data to the cloud and access it without any issues. However, because user data contains confidential information, network attackers target third-party cloud service providers to hack user data. Some methods and schemes have been proposed for risk assessment of the cloud, which help cloud providers act as defenders of security. Still, users cannot fully trust these service providers: they may ensure the integrity and confidentiality of the data, yet they may also have accessed its content. For example, cloud service providers are responsible for the security of the data, whereas cloud infrastructure providers make resources available on the cloud; infrastructure providers do not perform security assessments the way cloud service providers do. There is a chance of conflicts of benefit between attackers and defenders, and this conflict may lead users to think that cloud providers lack appropriate assessment mechanisms. Some third-party service providers offer security services to cloud providers by encrypting user data, but their benefit conflicts with cloud providers and users make them only semi-trustworthy, just like the cloud providers. Researchers have investigated dependability and software economics, behavioral economics, and psychological security to analyze and solve security and privacy issues; for example, game theory is one tool for analyzing the economics of security. Each person’s benefit is determined by the security level of the whole system: if the layers of security are strong, an attacker must defeat the security mechanisms one by one, which makes the data difficult to decrypt. Another point to note is that decision-makers can be divided into attackers and defenders, and both users and cloud providers can act as either.
To address this issue, game theory offers tools and models that help decision-makers form a strategy. The study assesses the security of public cloud storage providers and third-party mediators through equilibrium analysis; to be precise, the researchers evaluate a series of game models between public cloud storage providers and users to analyze the security of different services. Using the game-theory model, users can analyze the risk that their private data will be accessed by the cloud service providers, and cloud service providers can craft effective strategies to improve their service and make it more trustworthy. For example, if a cloud service provider follows a Nash equilibrium strategy and does not steal user data, then the cloud system provides effective internal security and keeps user data confidential. A semi-trustworthy third-party service provider will add security to user data if users trust the third party as much as the cloud service provider. I believe cloud providers should emphasize strong security measures and assessment mechanisms to protect the confidentiality, integrity, and availability of user data.
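The Nash-equilibrium idea in the summary above can be made concrete with a toy 2x2 game between a provider (protect or steal) and a user (trust or encrypt locally). The payoff numbers below are invented purely for illustration; the point is only how one checks that neither player gains by deviating unilaterally.

```python
# Toy illustration of equilibrium analysis (made-up payoffs, not from the
# article): find pure-strategy Nash equilibria of a 2x2 provider/user game.
from itertools import product

provider_moves = ["protect", "steal"]
user_moves = ["trust", "encrypt"]

# payoffs[(provider_move, user_move)] = (provider_payoff, user_payoff)
payoffs = {
    ("protect", "trust"):   (3, 3),    # healthy service, convenient for user
    ("protect", "encrypt"): (2, 2),    # extra user effort, less provider value
    ("steal",   "trust"):   (-2, -5),  # legal/reputation cost, user loses badly
    ("steal",   "encrypt"): (-1, 1),   # stolen ciphertext is worthless
}

def is_nash(p_move: str, u_move: str) -> bool:
    p_pay, u_pay = payoffs[(p_move, u_move)]
    # Neither player can gain by unilaterally deviating.
    if any(payoffs[(alt, u_move)][0] > p_pay for alt in provider_moves):
        return False
    if any(payoffs[(p_move, alt)][1] > u_pay for alt in user_moves):
        return False
    return True

equilibria = [cell for cell in product(provider_moves, user_moves) if is_nash(*cell)]
```

With these payoffs the only equilibrium is (protect, trust), which mirrors the summary’s point: when stealing is costly enough for the provider, not stealing is the stable strategy and users can rationally trust the service.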

  5. Summary on: “Study on Security and Prevention Strategies of Computer Network”
    Computer networks are expanding into every aspect of people’s lives; therefore, vulnerabilities and privacy are the main concerns. To tackle these problems, more research is being conducted in specific areas such as network management, firewalls, and encryption. Computer network security refers to the controls that protect privacy, integrity, and availability. There are two types of security: physical and logical. Depending on the individual or organization, privacy and data can mean different things; some want pure privacy, others protection, and plans must be made against different threats and disasters so that network communication can continue. Additionally, network security consists of hardware and software that protect data and address technical and management issues. Threats include insiders (internal threats): employees who leak information intentionally or unintentionally, reconfigure the network, alter or steal data, and cause other kinds of internal destruction. There are also unauthorized users with unauthorized access, such as hackers or users finding unauthorized ways to get resources. Attacks on integrity include manipulating data, denying users access to basic operations, and providing wrong information to end users. Attackers find ways to intercept data (its frequency, length, and parameters) to obtain valuable information, which is hard to detect. Attackers can also pretend to be network control, leaders, or other entities so they can access data, use unlimited resources, deny actual users, and obtain or modify key information such as passwords. Attackers can destroy users’ access so that they cannot use daily resources or operations, and they can replay traffic: obtain information and resend it as many times as they want, whenever they want. Other kinds of threats are computer viruses, network errors, disasters, etc.
There are technical protections such as firewalls, constant virus analysis, monitoring, and scanning. At the technical level, a department of network administrators, technicians, and trained users maintains the security system, detecting viruses and backing up data on time. Network access control is especially important because it ensures that access is granted only to authorized users. Data backup and recovery matter because they are how administrators recover data after an accident, using different strategies. Application code technology, one of the main components of information security, provides integrity through passwords, encryption, signatures, and different types of keys. Use antivirus programs and do not download unknown content. Research up-to-date, high-security operating systems for a safer, higher-performance environment, and do not give a virus any chance. Effective computer network security depends on network protections, security technology, implementations, measurements, laws, and regulations to protect computer users from viruses, crimes, and hackers. Users need to be educated and made aware of safety, and management institutions must continuously improve technology and laws. Never-ending education for users and staff (codes of conduct, computer safety principles, and obeying rules and regulations) maintains a safe and reliable working environment. Finally, there should be different types of rooms dedicated to different aspects of computer security countermeasures.

    Works Cited
    Li, F. (2012). Study on Security and Prevention Strategies of Computer Network. International Conference on Computer Science and Information Processing (CSIP).
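The “passwords, encryptions, signatures, and different types of keys” point in the summary above can be shown with a tiny standard-library example. This is a generic integrity-check sketch (an HMAC), not a technique taken from Li’s paper: anyone who lacks the shared secret key cannot forge a valid tag, so tampering with the data is detectable.

```python
# Minimal integrity protection with a keyed hash (HMAC), illustrating the
# "keys and signatures" idea. The key here is a placeholder; a real system
# would use a randomly generated secret.
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # placeholder; generate randomly in practice

def sign(message: bytes) -> str:
    """Tag a message so any later tampering can be detected."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message), tag)
```

If an attacker alters the message in transit, `verify` fails, which is exactly the integrity property the summary describes.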

  7. Advanced Micro Devices (AMD) has competed against Intel in the CPU market, and recently AMD released the Zen microarchitecture, on which the Ryzen processors are based. AMD has reached the same number of physical cores per package as Intel. Each chip was tested in benchmark programs to push it to maximum output. Both chips use the same x86 instruction set and are manufactured with 14nm transistors. Ryzen is equipped with more cache than its Intel counterpart. Ryzen processors have fewer output ports on the scheduler, the unit that orders processes for execution and priority. While the Intel chip has more outputs built in, its scheduler is undocumented and no information about it is available. The Intel chip has a feature called Turbo Boost, which raises the clock speed up to a certain threshold, unlike the Ryzen chip, which is locked.
    The software used was SPEC, a benchmark testing program with multiple test scenarios that exhibit real-life simulations of CPU use. The series of tests measures how fast the CPU can process instructions under realistic workloads. Both test benches ran the same Linux OS (Ubuntu) and had the same amount of DDR4 RAM, 16GB. The results show that both performed well, but the Intel chip consumed more energy than the Ryzen. Multithreaded simulations were difficult to examine properly due to synchronization. The tests show that Ryzen performs better in multithreaded tasks than Intel’s Core CPU while consuming less energy. In conclusion, Intel’s 8th-gen i7 processor outperformed in the tests but consumed more energy than Ryzen; the chips perform similarly, but there are differences in how they are manufactured.
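SPEC times fixed workloads on each CPU. The same idea scales down to a few lines: time a CPU-bound task several times and keep the median so one noisy run does not skew the comparison. The workload below is a stand-in I made up, not an actual SPEC test.

```python
# Scaled-down benchmarking sketch: median wall-clock time of a CPU-bound
# workload over several repeats (the workload is a stand-in, not SPEC).
import statistics
import time

def benchmark(workload, repeats: int = 5) -> float:
    """Return the median wall-clock seconds over several runs."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    return statistics.median(times)

def sum_of_squares():
    # Simple integer-heavy loop standing in for a benchmark kernel.
    return sum(i * i for i in range(200_000))
```

Running `benchmark(sum_of_squares)` on two machines gives a rough per-workload comparison, which is the shape of what the SPEC runs above do at much larger scale.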

  8. Reverse Outline
    Article name: MOBILE APPLICATIONS FOR BUSINESS
    By N. Angelova

    1. INTRODUCTION: Mobile phones are among the most common and easiest-to-use devices, not only for everyday use but also for business purposes.
    2. MOBILE TECHNOLOGIES: Mobile phones can be described as a part of technology that involves mobility, uses communication infrastructures, protocols, and portable devices used for cellular communication and allows users to perform various tasks flexibly in terms of time and place.
    3. MOBILE OPERATING SYSTEMS: There are different types of OS that are used for Mobile phone such as, iOS, Android, Windows, Samsung, BlackBerry etc.
    4. MOBILE COMMUNICATIONS: The development of mobile technologies, operating systems and devices helps build the development of mobile communications which allows users to communicate with others located in different places without using any physical connections like cables and wires.
    5. MOBILE APPLICATIONS: There are different types of mobile applications that are developed depending on the OS.
    i. MOBILE APP STORES: There are different types of store depending on the OS such as, Google play (Android), App Store (iOS), Windows 10 Mobile (Microsoft store) etc.
    ii. MOBILE APPLICATIONS FEATURES: Features are the most important thing in a mobile device because a user buys a mobile phone to use its features.
    6. MOBILE APPLICATIONS FOR BUSINESS: Some of the importance of mobile phone in business are, creating a direct marketing channel, Visibility of the business to customer, building a brand for recognition, Improving customer engagement etc.
    7. Conclusion: Mobile devices are an integral part of our lives and it seems like everything we possess is inside them behind the screen which allows us to see the world in our hand.

  9. AIR5: Five Pillars of Artificial Intelligence Research

    Reverse Outline:

    i. Introduction

    a. The main goal of artificial intelligence was to reach a level of intelligence similar to humans. Despite its modest beginnings, AI can now beat human intelligence, as when IBM Watson won Jeopardy! and the AlphaZero algorithm, with the help of machine learning, defeated a world-champion chess program.

    b. Due to the many benefits of AI, people believe it has a significant impact on society. AI has enormous potential to improve human decision-making in health care, economics, and governance, etc. But, for people to widely trust AI there are some challenges such as its rationalizability, resilience, reproducibility, realism, and responsibility.

    ii. Rationalizability of AI Systems

    a. Deep neural learning is a part of machine learning that AI uses to mimic the human brain. It has been criticized for being highly opaque because there is no explanation of why certain inputs lead to the observed outputs, even though it achieves remarkable prediction accuracy.

    b. For people to accept the modern AI system, it needs the ability to rationalize, interpret, and explain its decisions. Otherwise, it will compromise the safety of critical applications such as medical diagnosis and autonomous driving, where people’s lives are at stake.

    c. Interpretable and explainable AI is the core of rationalizability. Although deep neural learning is one of the strongest parts of machine learning, it does not yet satisfactorily represent uncertainties.

    iii. Resilience of AI Systems

    a. Despite massive improvements in AI, systems still tend to be easily fooled. For example, if someone adds two black and white stickers to a “stop sign”, the AI may recognize it as a “speed limit” sign.

    b. Many steps have been taken to make DNN more resilient, but the core issues are yet far from being taken care of, and they need significant research attention.

    c. The goal of an attacker is to introduce a new dataset or alter the existing one, which forces the learners to learn a bad model. Therefore, the need to take measures against malicious data poisoning attack is very crucial for secure AI.

    iv. Reproducibility of AI Systems

    a. It is necessary to ensure the reproducibility of AI applications to maintain their trustworthiness by designing and complying with standardized software requirements.

    b. One of the key obstacles in the way of reproducing recorded results effectively is the vast number of hyperparameters. The absence of expertise in the hyperparameter selection may lead to the poor results of the trained model.

    c. There is a continuous effort to build algorithms that can automatically transfer and reuse acquired knowledge across problems. The main goal is to improve AI’s generalizability.

    d. There is a growing collaboration of open-source software development in the field of AI. Still, there is a need for generally agreed software standards to be specified.
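Section iv’s reproducibility point (unrecorded hyperparameters and seeds make results hard to reproduce) has a minimal practical counterpart: fix the random seed and save the full configuration alongside any reported number. The “experiment” below is a stand-in I invented; it is not from the AIR5 paper.

```python
# Reproducibility sketch: the same seed and config must yield the same
# result. The "training" here is a toy stand-in for a real experiment.
import json
import random

def run_experiment(config: dict) -> dict:
    random.seed(config["seed"])  # same seed -> same pseudo-random run
    # Stand-in for a training loop: a score that depends on the RNG.
    score = sum(random.random() for _ in range(config["steps"])) / config["steps"]
    return {"config": config, "score": score}

config = {"seed": 42, "steps": 100, "learning_rate": 0.01}
result = run_experiment(config)
record = json.dumps(result, sort_keys=True)  # publish this with the result
```

Saving `record` (not just the score) is what lets someone else, or your future self, rerun the exact configuration, which is the habit the outline argues software standards should enforce.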

    v. Realism of AI Systems

    a. AI has shown significant improvement in human interaction; it even provides psychological support to Syrian refugees using chat-bots. But a balance must be maintained between the constant drive for high precision and automation.

    b. One of the key challenges AI faces is developing systems that handle the different ways every human expresses themselves, such as speech, body language, and facial expressions.

    vi. Responsibility of AI Systems

    a. Another key concept is that AI should be responsible; responsibility is a discipline of how one should act toward others, and it must be integrated into all levels of the AI system.

    b. There is no doubt about AI’s ability to learn and solve complex problems, often exceeding human performance levels. But what is alarming is that robots taking over the world is a hot topic nowadays.

    c. There are many different types of concerns when it comes to ethics, such as how AI will handle privacy, people’s data, and how it will comply with humanitarian laws, etc.

    d. Building ethics into AI systems is unquestionably a significant investment.

    vii. Conclusion
    a. It is very important to make sure AI systems cover all the concepts mentioned above from Rationalizability to responsibility for it to function reliably and ethically.

    References

    Ong, Y.-S., & Gupta, A. (2019). AIR5: Five Pillars of Artificial Intelligence Research. IEEE Transactions on Emerging Topics in Computational Intelligence, 3(5), 411–415.
    https://ieeexplore-ieee-org.citytech.ezproxy.cuny.edu/document/8782800/references#references

  10. IoT was created in 1999 (the term was coined by Kevin Ashton) and was officially launched in 2005 by the International Telecommunication Union. It has physical characteristics and a virtual representation. The main idea of IoT is to convert miniature devices into smart objects and make them dynamic; it represents the parent class and enables intelligent communication with the information network. In the last few years, the concept of IoT has been applied in greenhouse monitoring, smart electric meter reading, telemedicine monitoring, and intelligent transportation. With 5G networks, IoT will make the online world much stronger, but the biggest threat is protecting privacy. The main goal of this study is to analyze defenses against IoT-related challenges, review potential solutions to IoT security threats, and gain insight into the practical implications of IoT security. Patient data and staff can be monitored automatically using IoT technology, and many IoT applications are used in smart cities. Protecting privacy is an important issue in a digital environment, but there is a risk of individual breaches of each device in the IoT network. To protect the authenticity of information, only authorized users should exchange it. The main goal of IoT is to provide accurate data to the user, so IoT information must be protected so that no one can steal, delete, or edit anything. Nonrepudiation relates to authenticating that a legitimate party is getting access to the promised service.
    CYBER-ATTACKS ON IoT APPLICATIONS:
    A sinkhole attack creates network traffic and collapses network communication; the attacker creates counterfeit nodes and sends route requests to neighboring nodes.
    A wormhole attack is an internal attack that is very difficult to identify because network activity appears unchanged.
    In a selective forwarding attack, a compromised node refuses to forward some of the packets in its outstanding buffer, such as control information or data packets, in order to cut off the packets’ propagation.
    A Sybil attack can create false reports, increase the traffic load with spam, and cause loss of privacy.
    In a hello flood attack, the attacker transmits hello packets so that receivers incorrectly believe the sender is within range.
    SET OF SECURITY REQUIREMENTS:
    A study titled ‘Internet of Things’ has presented a secure model of how data can be kept secure; the network technology in this study emphasizes the need for a legal framework in accordance with international standards. Zhuo and Chao address disruptions to the security system by adding encryption methods, securing the communication medium, using cryptography, and protecting sensor/control data to tackle the major security and privacy issues.
    Abomhara and Koien presented a paper on Internet security issues and a certification approach.
    The authors discuss IoT and provide future directions for tackling current and future privacy initiatives. IoT will connect billions of devices, which raises the requirement for protection. Overall, this study discusses in detail the security threats to the IoT environment and their solutions.

  11. Network Attacks and Prevention techniques – A Study
    Introduction – Network security is often thought of as users accessing platforms with a username and password, with the platform protected by a firewall and antivirus software.
    Network security – There are two types of network security: hardware security and software security. Hardware security, like defensive systems, is often used in corporations, while software security consists of applications used by individuals or small firms.
    Literature survey – The Open Web Application Security Project (OWASP) is a foundation dedicated to improving the security of software by collecting vulnerabilities and attacks on a daily basis. The four main parts of OWASP are integrity, authorization, authentication, and secure control.
    Different types of attacks – Attackers can gain unauthorized access to systems when security software is outdated or not in place.
    Network attacks – There are two kinds of network attacks: passive and active. A passive attack is more like a man-in-the-middle attack that gains information during transfer, while in an active attack the attacker proactively attacks the system by injecting unauthorized code.
    Browser attacks – The most frequent web-browser-based attack, in which the attacker hacks into the system by adding malware to the browser.
    Worm attacks – A worm attack replicates itself to spread from computer to computer without external intervention. WannaCry ransomware is a well-known worm attack that has affected over half a million computers.
    Malware attacks – Three common types of malware attacks are phishing emails, malicious websites, and malvertising.
    Identity spoofing – An attack in which the attacker pretends to be the victim to gain access to confidential data by faking the victim’s real IP address to fool the authentication system.
    Sniff attacks – A sniff attack captures data packets as they transfer through a computer network; a sniffer is a device or application used for this type of attack.
    Man-in-the-middle attacks – In a MITM attack, the attacker intercepts confidential data during the transmission between two victims and accesses the data without their awareness.
    ARP attacks – An Address Resolution Protocol attack is a kind of sniff attack in which the attacker captures a packet’s data during transfer by altering the data or replacing the MAC address with the attacker’s own.
    Botnet – “Botnet” is formed from “robot” and “network”; it is one of the main attacks used to gather unauthorized confidential data.
    DNS spoofing attacks – In a Domain Name Server spoofing attack, the attacker alters the IP address of domains on the DNS and redirects all traffic for that address to an IP address assigned by the attacker.
    Backdoor attacks – In a backdoor attack, the attacker creates a backdoor using a fake script, gaining access to the service by forging a link or document for the victim to open without suspicion.
    Preventive measure for some major attacks
    Man-in-the-middle attack – To prevent a MITM attack, the two endpoints should communicate over a more secure network and encrypt the transmission using an encryption protocol.
    Strong encryption – strong encryption keeps an attacker from easily gaining access to credential data, since strong encryption is very tough to crack or decode.
    VPN – a virtual private network provides a private network, like a bridge or tunnel, for two or more parties to communicate without fear of attacks such as MITM or sniff attacks.
    HTTPS – HTTPS provides a more secure connection in the browser by issuing certificates only to the participating entities, which are verified by each party before transmission.
    Backdoor attack – installing a web application firewall can stop an attacker from forging a link for the victim to open and generating a backdoor from it.
    Botnets – To prevent botnet attacks, the user should keep the intrusion detection system up to date and specifically configure the ports, or shut down ports not currently in use.
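The message-authentication idea behind these preventive measures (encrypting and verifying transmissions so a man in the middle cannot tamper undetected) can be sketched in a few lines of Python. This is an illustrative example only, not from the article; the shared key and messages are hypothetical, and the standard-library `hmac`/`hashlib` modules stand in for a full encryption protocol.

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical pre-shared key between the two endpoints

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag the receiver can verify."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Reject the message if it was altered in transit."""
    return hmac.compare_digest(sign(message), tag)

# Sender signs the message before transmission.
message = b"transfer $100 to account 42"
tag = sign(message)

# An attacker in the middle tampers with the payload...
tampered = b"transfer $100 to account 99"

assert verify(message, tag)        # untouched message passes
assert not verify(tampered, tag)   # tampered message is rejected
```

Without the shared key, the attacker cannot forge a valid tag for the altered message, which is why authenticated transmission defeats this class of attack.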

  12. Artificial Intelligence: Benefits and Unknown Risks
    By: Dixon Jr. R

    1. Introduction: Artificial intelligence has been a great help to society and its people; however, there are many risk factors that have not yet been officially acknowledged.

    2. AI for eDiscovery & Document Review: AI has the ability to be efficient and effective with resources to decrease the work of manual laborers.

    3. AI in Law Enforcement – Predictive Policing: AI has been assisting law enforcement with facial recognition and crime prediction algorithms. Law enforcement agencies use PredPol, an AI algorithm that helps predict crimes in certain areas on a daily basis.

    4. AI for Crime Solving: Researchers are introducing algorithms to determine class of weapons based on gunshot audio. These algorithms will be able to recreate crime scenes by using certain data to help investigators better understand the event.

    5. Judicial Use of AI for Risk Assessment: COMPAS is an AI driven assessment tool to reassess defendants in criminal cases.

    6. AI’s Predictive Policing Risks: The Department of Justice discovered repeated law violations by police involving the use of extreme force against Black people and minority groups, and failures to address violence against women. Evidence indicates that the developers of the algorithm did not provide accurate historical data.

    7. Judicial Use of AI: Supporters of AI think that this is the only way to decrease human error and bias in the courts. However, studies show that AI has actually revealed bias in the past, mistakenly flagging Black defendants as potential reoffenders more often than white defendants.

    8. Crowdsourced Risk Assessment vs AI: A 2017 study recruited volunteers online and asked them to predict whether certain individuals were likely to reoffend. The crowdsourced predictions were as precise as COMPAS at predicting repeat offenders.

    9. Judicial Discretion vs AI: Although AI algorithms are great at recognizing consistency in data and are able to generate predictable results, consistency and predictions are not the same as being fair. Technologies and machines struggle to operate in a world where biases and prejudices exist.

    10. Conclusion: AI plays an increasingly important role in our lives and in the criminal justice system. However, it can sometimes do more harm than good, and we as a society need to re-evaluate the uses of AI.

  13. Machine Learning and Deep Learning Methods for Cybersecurity
    The Internet is transforming how people learn and operate, but it also exposes us to further critical security issues. Cybersecurity is the sum of technologies and processes established to protect computers, networks, applications, and data from threats and unauthorized intrusion, modification, or destruction (Aftergood, 2017). Network security systems include firewalls, antivirus software, and intrusion detection systems (IDSs). There are three major types of network analysis for IDSs: misuse-based (often called signature-based), anomaly-based, and hybrid. Security breaches include external intrusions and internal intrusions. This article presents a literature analysis of machine learning (ML) and deep learning (DL) approaches for computer security applications, focused mostly on the last three years. It emphasizes ML and DL network security measures, ML/DL approaches, and their explanations. The article aims to serve as a resource for those interested in studying network intrusion detection with ML/DL. It explains misuse detection as well as anomaly detection. The ML and DL methods covered in this paper apply to intrusion detection in both wired and wireless networks.
    AI is an advanced technology discipline that explores and advances ideas, processes, strategies, and software that replicate, extend, and expand human intelligence (Smith & Eckroth, 2017). ML is a branch of artificial intelligence (AI) that aims at making forecasts with computers. ML emphasizes regression and classification based on established features already acquired from training data. ML may also be unsupervised and then used to learn and build simple behavioral profiles. DL is a growing paradigm in machine learning science, focused on learning representations of data. Its motivation lies in forming a neural network that replicates the analytical learning mechanism of the human brain; it simulates the way the human brain processes data such as pictures, sounds, and texts. The critical distinction between deep learning and mainstream machine learning is its efficiency as the amount of data expands. ML and DL methods do not work without representative data, and obtaining such a dataset is difficult and time-consuming (Sneps-Sneppe et al., 2020, p. 55).

    A computer network’s security data is commonly collected in two ways: 1) directly, and 2) using an existing public data set. This article examines a large number of academic intrusion detection studies based on machine learning and deep learning. It addresses some of the difficulties in this field of study: benchmark datasets are very few, the same datasets are reused, and the feature extraction procedures used by each organization differ. Trends in intrusion detection are also mirrored in the fact that the analysis of hybrid models has become popular in recent times.
    The most effective approach for detecting intrusions has still not been developed. Each approach to implementing an intrusion detection system has distinct advantages and drawbacks, as can be seen from the comparisons between different methods. Datasets for network intrusion detection are significant for training and testing systems (Gasu, 2020, p. 249). There are several problems with the newer public datasets, such as variable data and incorrect content. Network details change very rapidly, which makes training and applying the DL and ML models difficult; a model has to be deployed both quickly and over the long term.
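As a rough illustration of the anomaly-based detection the article surveys, a minimal sketch might flag readings that deviate sharply from a learned baseline. This example is not from the article; the traffic numbers and the 3-sigma threshold are hypothetical stand-ins for a real trained model.

```python
from statistics import mean, stdev

# Hypothetical per-minute connection counts observed during normal operation.
baseline = [42, 38, 45, 40, 44, 39, 41, 43, 40, 42]

def is_anomalous(value: float, history: list, threshold: float = 3.0) -> bool:
    """Flag a reading that deviates more than `threshold` standard deviations
    from the historical mean -- the core idea of anomaly-based detection."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > threshold * sigma

print(is_anomalous(41, baseline))   # typical traffic -> False
print(is_anomalous(400, baseline))  # sudden spike -> True
```

Real ML/DL intrusion detectors replace this single statistic with features learned from labeled or unlabeled traffic, but the principle of scoring deviation from a modeled baseline is the same.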

    References
    Aftergood, S. (2017). Cybersecurity: The cold war online. Nature, 547(7661), 30–31. https://doi.org/10.1038/547030a
    Gasu, D. K. (2020). Threat Detection in Cyber Security Using Data Mining and Machine Learning Techniques. Modern Theories and Practices for Cyber Ethics and Security Compliance, 234–253. https://doi.org/10.4018/978-1-7998-3149-5.ch015
    Smith, R. G., & Eckroth, J. (2017). Building AI Applications: Yesterday, Today, and Tomorrow. AI Magazine, 38(1), 6–22. https://doi.org/10.1609/aimag.v38i1.2709
    Sneps-Sneppe, M., Namiot, D., & Pauliks, R. (2020). Information System Cyber Threats and Telecommunications Engineering Courses. Latvian Journal of Physics and Technical Sciences, 57(1–2), 52–61. https://doi.org/10.2478/lpts-2020-0007

    Article
    Xin, Y., Kong, L., Liu, Z., Chen, Y., Li, Y., Zhu, H., Gao, M., Hou, H., & Wang, C. (2018). Machine Learning and Deep Learning Methods for Cybersecurity. IEEE Access, 6, 35365–35381. https://doi.org/10.1109/access.2018.2836950

  14. “Adaption of a Secure Software Development Methodology for Secure Engineering Design”

    I. INTRODUCTION
    A. With the rise of IoT devices, old and unsecured systems and software have been more vulnerable to attacks.
    B. Originally IT was in charge of cybersecurity within these systems; however, as more engineering is combined with IT infrastructure, engineers need to learn how to incorporate security as well, since it is no longer the job of IT professionals alone.
    C. The International Council on Systems Engineering (INCOSE) started the process of incorporating cybersecurity into its systems lifecycle, and many have proposed the Secure Systems Development Life Cycle (S-SDLC) as a method for securing systems; however, not much has changed since.
    D. At the undergraduate level, engineers are told about the cyber threats of building a poor system or software but aren’t taught how to fix them.
    i. Assess how much final-year software engineering students know.
    ii. Propose a way to help engineering students secure their work by implementing security into the engineering systems life cycle.
    II. BODY 1(METHODOLOGY)
    A. To develop a helpful approach for teaching final-year students how to incorporate security into their engineering projects, the researchers first need to know how much the students already know, in the form of a content analysis, in order to create a baseline.
    B. Even though this study was conducted at only one institution, it was designed to create a broad baseline for understanding whether students had knowledge of software security engineering.
    C. Based on the content analysis results, a simple and practical plan was created to help engineering students learn how to implement secure software development through a series of methods from the secure software development methodology (SecSDM).
    III. BODY 2 (BASELINE STUDY: ENGINEERING STUDENT SECURITY UNDERSTANDING)
    i. The final-year project was a series of steps in the project design cycle for engineering students to learn how to solve the problems they face now, as well as future problems in their careers.
    ii. The capstone project did not focus on specific software security but rather followed the normal life cycle of a project, focusing especially on hardware, software design, and testing.
    B. Table 1 shows all the words and phrases used for the analysis, obtained from the SecSDM.
    C. Table 2 shows the results of the content analysis: when security or software itself was referred to, there were not many counts, resulting in a more accurate baseline.
    1. Based on Table 1, it is noticeable that the words software and security are not associated.
    2. Table 1 shows a high count of terms used; however, when software security is referred to, the word count decreases drastically.
    ii. Results showed that students were familiar with security, integrity, and identification, but not in the context of software, since complex terminology regarding basic software was used little or not at all.
    iii. The results of the content analysis showed that engineering students did not consider software security in their final projects, since it was not a requirement.
    D. After the project, a survey was used to find out whether the students understood the terminology and implementation of security, as described below.
    1. General information describing the student’s engineering project.
    2. Question about risk assessment regarding the implementation of software security in the project.
    3. Opinions on software security on engineering projects and the understanding of software security implementation.
    ii. The survey showed that students understood the importance of software and security, as well as the concepts of authentication and identification, within their projects.
    iii. Students admitted that they were not familiar with many security terminologies and only knew how to implement the most basic security mechanisms in their projects.
    iv. Students stated that since they were not trained in security, they were not familiar with cybersecurity; while some believe they should be trained, others believe that professional IT and software engineers should be the ones to identify the risks and fix them.
    v. The survey confirmed the results obtained from the content analysis: students understand the importance of security, but lack of knowledge and lack of training cause them not to feel responsible for it.
    vi. Basic education in software security services and mechanisms in engineering projects should be given to engineering students, since this will make them more comfortable taking responsibility and implementing security features.
    IV. BODY 3 (SECURITY ACTIVITIES IN ENGINEERING DESIGN)
    A. Even though many variations of the engineering design process exist, the most common one is the System Development Life Cycle (SDLC) and its many variations.
    B. Even though there are many writings on the development, implementation, and use of the Secure Systems Development Life Cycle, no standards for the S-SDLC currently exist; however, Table 4 shows what some experts consider necessary for the implementation of security within the SDLC.
    C. Even though there are many frameworks for implementing security into engineering design, the real problem is the implementation itself, since there is no clear guideline for how to do it.
    i. The SecSDM methodology, together with a software tool, was developed as a practical approach to implementing security into software design.
    V. BODY 4 (MAPPING OF SECSDM TO SDLC FOR ENGINEERS)
    i. Undergraduate engineering programs neglect to teach students the importance of integrating security into their engineering design.
    ii. The SecSDM might be a good fit for engineering students, since it provides a practical guideline for implementing security into their products or systems.
    iii. Table 6 gives details of an adaptation of the SecSDM to the engineering field to form the E-SecSDM.
    iv. Table 6 briefly discusses the key security engineering activities, using examples from engineering and stating how they can differ from software development.
    B. In the exploration phase, the engineer is mainly concerned with exploring technology readiness and has to conduct a risk analysis considering all possible risks to the project, as stated in the ISO/IEC TR, which includes the confidentiality, integrity, availability, accountability, authenticity, and reliability of information.
    i. An example of confidentiality and reliability being violated if certain requirements are not taken into consideration during the design process.
    C. After the pre-evaluation, technology-readiness assessment, and risk assessment, the engineer must find possible solutions, define the system requirements and product specifications, and identify the security risks from the security requirements.
    i. Returning to the example, if the engineer does not want to violate any of the requirements, the engineer might have to implement a more secure and strict access control/authorization security service and ensure that the data cannot be tampered with.
    D. Usually the goal of the design and development phase is for engineers to design the system architecture; the security in this stage must be mapped to specific security mechanisms, and the SecSDM recommends encipherment, access control, and other mechanisms during this stage.
    i. Returning to the example, a way for the engineer to keep records confidential is to use encryption and to implement strict access control as security mechanisms.
    E. Although the production and implementation phase involves the construction of subsystems, systems integration, and testing, the E-SecSDM should be applied during this phase, since many engineers are not knowledgeable in secure coding standards, and with the E-SecSDM it is easier for engineers to identify possible security tools and components.
    i. Returning to the example, during this phase engineers need to do testing and verification in order to see which areas need the implementation of security controls, whether all subsystems abide by the coding rules, and whether they are able to withstand an attack.
    F. During the utilization and support phase, the engineer is responsible for ensuring the product operates according to the user’s needs, and for continuous monitoring of the software and firmware to make sure the product is secure and used properly.
    i. For example, the software functions of a fitness tracker must be kept up to date; even though updating the database might be a professional’s responsibility, keeping the phone or tracking device updated is the user’s responsibility.
    G. Even though the SecSDM did not have a specific phase for retirement, in engineering the retirement phase of a project leaves behind an artifact that may contain sensitive and private information.
    i. Returning to the fitness tracker example, during this phase all the data stored in the database must be deleted, and the developer must inform users how best to dispose of the product, which can include deleting any companion app that was connected to the fitness tracker.
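The encryption and access-control recommendations in the design phases above can be illustrated with a short sketch. This example is not from the article; the record types, roles, and function names are hypothetical, and a real design would pair this check with actual encryption of the stored records.

```python
# Hypothetical mapping from confidential record types to the roles
# explicitly granted read access (strict access control / authorization).
RECORD_PERMISSIONS = {"health_record": {"owner", "physician"}}

def read_record(record_type: str, role: str) -> str:
    """Release confidential data only to roles with an explicit grant."""
    if role not in RECORD_PERMISSIONS.get(record_type, set()):
        raise PermissionError(f"role {role!r} may not read {record_type!r}")
    return "heart-rate: 62 bpm"  # placeholder payload for the fitness-tracker example

# The owner and physician can read; any other role is denied.
assert read_record("health_record", "physician")
try:
    read_record("health_record", "advertiser")
    denied = False
except PermissionError:
    denied = True
assert denied
```

The point of the sketch is the deny-by-default rule: a role absent from the permission table can never read the record, which is what "strict access control" means in the phases described above.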
    VI. CONCLUSION
    A. Even though this paper motivated various people to write proposals on implementing secure software practices into engineering design, there is still no practical approach for doing so.
    B. The SecSDM can provide a practical approach for engineers to integrate security throughout the product, while the E-SecSDM can provide a clear breakdown of each phase of the SDLC, especially for engineers unfamiliar with the importance of including security in their product designs.
    C. The first proposed step for including software security practices is to introduce final-year engineering students to the integration of security using the E-SecSDM, and to conduct a case study in which they adopt these guidelines throughout the process.
    D. Even though many engineering programs still neglect to address the importance of including security in engineering design, many believe that students should be introduced to cybersecurity in their early years, when they are first learning software design, while others believe the E-SecSDM guidelines should be integrated early so that engineering students learn how to consider and include security.

  15. VMobiDesk: Desktop Virtualization for Mobile Operating Systems
    Authors: Kui Su, Peiyu Liu, Liang Gu, Wenzhi Chen, Kai Hwang, Zhibin Yu.

    i. Introduction
    Mobile devices have become much more widely available.
    Bring-Your-Own-Device encourages employees to bring their own devices to work.
    Security and Privacy Concerns: As people use their personal devices at work, data is more vulnerable.
    VDI ensures that corporate data is stored on the company’s servers and not the personal device storage.
    There are limitations to smart devices that make them different from actual PCs.
    Mobile cloud computing was introduced to combat the limitations.
    VMI makes the user experience safer on their mobile devices.

    ii. Section II
    VMI provides a smooth local OS experience, and the key is to host a mobile OS on the cloud server.
    There are three parts of a VMI.
    vMobiDesk is a VMI system that provides mobile users with remote access to virtual desktops.
    There are many display protocols for mobile VDI systems.

  16. Exploring the Merits of NoSQL: A Study Based on MongoDB
    I. Introduction
    a. Big data refers to the rapid growth of data, and this increase is causing problems with how to store it. Relational database management systems can only store data that fits a specific schema. Non-relational databases are the solution for storing unstructured, high-volume data.
    b. There are different types of NoSQL databases, one of which is MongoDB. Document NoSQL databases are important because they are useful for content management, blogging, and other applications.
    II. Related Work
    a. A lot of research and studies have been done on NoSQL databases because of the need for data diversity. Some studies have integrated NoSQL and MongoDB by adding middleware between them; others have compared NoSQL databases.
    III. NoSQL Databases
    I. Classification of database
    Different types of databases are:
    a. Key-value databases are the most commonly used databases and perform operations on the database based on a key-value pair.
    b. Document databases store and retrieve XML, JSON, and BSON documents.
    c. Column family stores are databases that store data in column families as rows.
    d. Graph databases store entities and the connections or relationships between them.
    II. Advantages of NoSQL
    a. Elastic scaling allows the business to grow.
    b. NoSQL can manage big data.
    c. NoSQL requires less administration.
    d. NoSQL is less expensive due to its lower management costs.
    e. Changes in NoSQL can be done quickly.
    IV. MongoDB
    a. MongoDB is a document database, and it stores collections instead of a table.
    b. Some essential features of MongoDB are flexibility, rich queries, sharding, ease of use, high performance, high availability, and support for multiple storage engines.
    V. Data storage in MongoDB in JSON format
    a. JavaScript Object Notation is used in MongoDB because it is uncomplicated for humans to read and supports all fundamental data types.
    VI. Conclusion
    a. NoSQL databases are useful when dealing with unstructured, semi-structured, and structured data. NoSQL also imposes fewer schema constraints than relational databases.
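The document model described above can be sketched with Python's standard `json` module. This is an illustrative example, not from the article; the field names are hypothetical, and MongoDB actually stores documents as BSON, but the round trip below shows why the schema-free JSON format suits document databases.

```python
import json

# A MongoDB-style document: nested fields, no fixed table schema required.
post = {
    "title": "Exploring the Merits of NoSQL",
    "tags": ["nosql", "mongodb", "big data"],
    "comments": [
        {"user": "alice", "text": "Great overview"},
    ],
}

# Documents serialize directly to JSON and back without loss.
encoded = json.dumps(post)
decoded = json.loads(encoded)
assert decoded == post

# A second document in the same collection may have entirely different
# fields -- the flexibility the summary attributes to NoSQL.
draft = {"title": "Untitled", "published": False}
```

In a relational table, `post` and `draft` could not share one schema; in a document store, both live side by side in the same collection.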

  17. This research paper discusses the implementation of a microservice architecture within the IT Helpdesk system.
    IT Helpdesk is software that is used to help users troubleshoot their problems, and receive assistance in regards to their products/services.
    Microservices are a collection of services (isolated units) that structure an application around business capabilities.
    Microservices can be scaled independently along three axes: X-axis, Y-axis, and Z-axis.
    Microservices allow the development of new features and continuous deployment of large systems without affecting the older sections.
    The IT Helpdesk structure implemented with microservices contains four services: classification, ticket, feedback, and analysis.
    Users use the interface to initiate a service request by creating a ticket and must fill out their personal info, the subject of the request, and the level of impact; this information is then processed through the four services.
    As a user creates a service request ticket, it goes through Classification for its subject category, to Progress while assigned to an IT team, to Solved once the IT team provides a solution, and to Closed when the solution is accepted by the user.
    The user can accept or reject the solution provided by the IT team: if accepted, the service ticket is “Closed”; if rejected, the status of the service ticket returns to “Progress”.
    A Thesaurus database is used to classify the subject of the service request in order to categorize it in IT helpdesk areas.
    After a category is received, the service request is added to the Ticket database and into the table “tbl_txn_service_request_ticket” by using the Ticket Service.
    If the solution is accepted by the user, the ticket will go through to the Feedback service and record the ticket as “Closed”, or if rejected by the user the ticket is sent to the Feedback service and the IT team is notified.
    A summary report is generated with the details of the service request tickets, with the data sorted by fields; the report serves as evidence of the IT helpdesk’s performance and also provides information about the user experience and commonly requested problems.
    The Analysis service connects the data from a generated summary report to the Report database, satisfying the given parameters and storing the results.
    The classification of service request subjects is evaluated through a simulated set of calls to the Classification service, with results showing that more than 70% of categories are assigned correctly to the subjects of the service requests; the roughly 30% not correctly assigned are a result of the thesaurus being incomplete.
    Each of the four services in Microservices is deployed and scaled on their own.
    The purpose of a microservice architecture is to allow for distribution and continuous delivery with each service running independently.
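The ticket lifecycle summarized above can be sketched as a small state machine. The state names ("Classification", "Progress", "Solved", "Closed") follow the summary, while the class and method names here are hypothetical and not from the paper.

```python
# Minimal sketch of the IT Helpdesk ticket lifecycle described above.
class Ticket:
    def __init__(self, subject: str):
        self.subject = subject
        self.status = "Classification"  # new tickets await subject categorization

    def assign(self):
        """Ticket is classified and handed to an IT team."""
        self.status = "Progress"

    def propose_solution(self):
        """The IT team posts a solution for the user to review."""
        self.status = "Solved"

    def user_response(self, accepted: bool):
        """Accepted solutions close the ticket; rejected ones reopen it."""
        self.status = "Closed" if accepted else "Progress"

ticket = Ticket("Cannot connect to VPN")
ticket.assign()
ticket.propose_solution()
ticket.user_response(accepted=False)
assert ticket.status == "Progress"   # rejection sends the ticket back to the IT team
ticket.propose_solution()
ticket.user_response(accepted=True)
assert ticket.status == "Closed"
```

In the actual architecture these transitions are split across the classification, ticket, and feedback services, each deployable and scalable on its own; the single class here only models the state flow.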

    Works Cited:
    R. Wongsakthawom and Y. Limpiyakorn, “Development of IT Helpdesk with Microservices,” 2018 8th International Conference on Electronics Information and Emergency Communication (ICEIEC), Beijing, 2018, pp. 31-34, doi: 10.1109/ICEIEC.2018.8473557.

    1. I had to change my article as I noticed I used a conference paper instead of a journal article.

      Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19

      I. Introduction
      ● As of April 2020, there have been over a million cases and over 85,000 deaths from COVID-19, a severe respiratory syndrome that has become a global pandemic reaching over 200 countries.
      ● Medical imaging such as CT scans and X-rays has been found to play a critical role in restraining the transmission of COVID-19 and in fighting it.
      ● CT scanning is one of the imaging-based diagnostics used for COVID-19 and includes three stages: pre-scan acquisition, image acquisition, and disease diagnosis.
      ● Artificial intelligence has made a huge contribution to the fight against COVID-19, as it allows for safer, more accurate, and more efficient imaging solutions.
      ● The article’s goal is to discuss the use of medical imaging with artificial intelligence in the fight against COVID-19 and to discuss machine learning methods in the imaging workflow.

      II. AI-Empowered Contactless Imaging Workflows
      1. Imaging facilities and workflows should also be considered important for reducing risks and saving lives during COVID-19.

      A. Conventional Imaging Workflow
      ● Contactless image acquisition is crucial to reduce the risk of technicians and patients being infected through contact with each other.
      B. AI-Empowered Imaging Workflow
      ● Artificial intelligence can be used to enable contactless scanning, as it can identify the pose and shape of a patient using data from visual sensors.
      ● The scan range, the start and end points of a CT scan, can be estimated by using visual sensors with artificial intelligence, improving scanning efficiency while reducing radiation exposure.
      ● Other scanning parameters also use artificial intelligence, such as ISO-centering, which aligns the targeted scan area with the scanner’s ISO center for optimal image quality.
      ● An imaging model called the parametric human model SMPL, along with an RGB-depth sensor, can still perform 3D patient body inference even if a sensor fails.

      C. Applications in COVID-19
      ● Contactless imaging workflows were established during the COVID-19 outbreak.
      ● A mobile CT platform with artificial intelligence implemented is an example of an automated scanning workflow, as it prevents unnecessary interaction between technicians and patients.
      ● The patient lies on the patient bed and waits while the technician observes through live video; once the patient is found to be ready, the patient positioning algorithm captures the patient’s pose.

      III. AI-Aided Image Segmentation and Its Applications
      ● Segmentation is crucial in image processing and analysis for assessing COVID-19, as it delineates the regions of interest (ROIs): the organs affected by COVID-19 and the infected areas.
      ● CT (computed tomography) produces high-quality 3D images, from which ROIs can be segmented.
      ● There are many segmentation papers directly related to COVID-19, and some view segmentation as necessary when analyzing COVID-19.

      A. Segmentation of Lung Regions and Lesions
      ● There are two categories that segmentation methods in COVID-19 applications are grouped into: lung-region and lung-lesion oriented methods.

      B. Segmentation Methods
      ● There are many techniques used for lung segmentation such as U-Net, a U-shape architecture with symmetric encoding and decoding signal paths, used in segmenting lung regions and lesions.
      ● There are variants of U-Nets such as the 3D U-Net, a V-Net, a VB-Net, a UNet++, and the Attention U-Net.
      ● Human knowledge and machine learning methods can be integrated with a segmentation network in order to provide adequate training data for segmentation tasks.

      C. Applications in COVID-19
      ● COVID-19 applications that report diagnoses can use segmentation, with different proposals for its use.
      ● Quantification is another application of image segmentation, with various proposals and uses listed.
      ● Image segmentation is crucial in COVID-19 applications, as it allows radiologists to accurately identify lung infection and to analyze and diagnose COVID-19.

      AI-Assisted Differential Diagnosis of COVID-19
      ● Patients suspected of having COVID-19 need diagnosis and treatment, and since COVID-19 is similar to pneumonia, AI-assisted diagnosis using medical images can be highly beneficial.

      A. X-ray Based Screening of COVID-19
      ● X-ray images are less sensitive than 3D CT images, as they can appear normal in the early stage or mild state of the disease.
      ● Many radiological signs are listed, including airspace opacities and ground-glass opacity.
      ● An experiment using a Bayesian convolutional neural network estimated the diagnostic uncertainty in COVID-19 prediction, with results showing that Bayesian inference improves detection accuracy.
      ● Three deep learning models (ResNet50, InceptionV3, and Inception-ResNetV2) were proposed to detect COVID-19 from X-ray images.
      ● The ResNet50-based model performs two tasks: classification between COVID and non-COVID, and anomaly detection (which optimizes the COVID-19 score used for classification).
      ● A deep convolutional neural network based model was proposed to detect COVID-19 cases from X-ray images, achieving 83.5% accuracy.
      ● Studies using X-ray images to classify between COVID-19, pneumonia, and healthy subjects have only 70 COVID-19 images available, which is insufficient for evaluating the "robustness of the methods."
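A common way to get the kind of diagnostic uncertainty the Bayesian network bullet describes is Monte Carlo dropout: keep dropout active at test time and average many stochastic forward passes. The toy model below is an assumption for illustration (a single logistic layer with made-up features and weights), not the paper's Bayesian CNN; it only shows how the mean of repeated stochastic predictions gives a probability and their spread gives an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, w, drop_p=0.5):
    """One forward pass of a toy logistic 'network' with dropout on the input."""
    mask = rng.random(x.shape) > drop_p      # randomly drop features
    z = (x * mask / (1.0 - drop_p)) @ w      # inverted-dropout scaling
    return 1.0 / (1.0 + np.exp(-z))          # sigmoid probability

def mc_dropout_predict(x, w, passes=200):
    """Mean prediction and spread over repeated stochastic passes."""
    probs = np.array([stochastic_forward(x, w) for _ in range(passes)])
    return probs.mean(), probs.std()

x = np.array([0.8, -1.2, 0.5, 2.0])          # made-up feature vector
w = np.array([1.0, 0.5, -0.3, 0.7])          # made-up weights
p, sigma = mc_dropout_predict(x, w)
print(f"predicted probability {p:.2f}, uncertainty {sigma:.2f}")
```

A high sigma flags cases the model is unsure about, which is the behavior the study exploited to improve detection reliability.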

      CT-Based Screening and Severity Assessment of COVID-19
      ● There are 4 stages of dynamic radiological patterns: early stage, progressive stage, peak stage, and the absorption stage.
      ● Many studies separate COVID-19 patients from non-COVID-19 patients; with the help of artificial intelligence, radiologists' reading time was reduced by 65%.
      ● Deep learning methods such as UNet++ segmentation models and ResNet50 classifiers were employed for diagnosis on CT images of hundreds of subjects with and without COVID-19, with different models showing different sensitivity and specificity.
      ● 2D lung-segmentation models trained on thousands of chest CT images can also be used, and these models achieve high sensitivity and specificity.
      ● Common pneumonia (such as viral pneumonia) and COVID-19 have a similar radiological appearance, so it would be beneficial to differentiate between the two when screening.
      ● A 2D CNN and a V-Net model can be used on manually delineated region patches to classify between COVID-19 and viral pneumonia.
      ● DeepPneumonia, a deep learning based CT diagnosis system, uses 2D slices paired with a pretrained ResNet-50 and a Feature Pyramid Network; this model can be used for pneumonia classification.
      ● A modified random forest can be used for diagnosis on chest CT images, along with a 3D VB-Net to segment the images.
      ● Because severity assessment is important in treatment planning, an RF-based model adopting a VB-Net for segmentation was proposed for severity assessment.
      ● The many studies proposing CT-based COVID-19 diagnosis show promising results, which is important for early detection and prediction of severity.
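The sensitivity and specificity figures these screening studies report come straight from the confusion matrix. The counts below are hypothetical, chosen only to show the arithmetic:

```python
# sensitivity = TP / (TP + FN): fraction of COVID-19 cases correctly flagged
# specificity = TN / (TN + FP): fraction of non-COVID cases correctly cleared
def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Made-up screening results on 1000 scans: 300 COVID-19, 700 non-COVID.
sens, spec = sensitivity_specificity(tp=282, fn=18, tn=665, fp=35)
print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}")
# → sensitivity = 0.940, specificity = 0.950
```

High sensitivity matters most for screening (missing a true case is costly), while specificity controls how many healthy patients are sent for unnecessary follow-up.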

      AI In Follow-Up Studies

      ● After clinical treatment, follow-up is important in treating COVID-19; however, it is challenging to apply artificial intelligence to this procedure because of the incubation period and infectivity.
      ● There is limited work studying COVID-19 follow-up; some attempts use machine learning-based methods, visualization techniques, constructive models, and follow-up functions in software platforms.
      ● Techniques and work regarding segmentation, diagnosis, quantification, and assessment may be helpful in developing a follow-up study that uses artificial intelligence.

      Public Imaging Datasets For COVID-19
      ● X-rays and CT scans are often not available for COVID-19 applications, which slows the continued research and development of artificial intelligence methods.
      ● Medical images such as CT slices from websites and publications can be found in the COVID-19 Image Data Collection, the Coronacases Initiative, and the COVID-19 CT segmentation dataset; however, their quality is not sufficient for training and testing artificial intelligence algorithms.

      Discussion and Future Work
      ● Artificial intelligence has already been applied to image-based diagnosis, and many more applications are expected in the future.
      ● Artificial intelligence-empowered image acquisition workflows make scanning more efficient and more effective at protecting medical staff from COVID-19 infection; more artificial intelligence applications for image acquisition, with further benefits, are expected.
      ● To improve results and make them clinically useful, the quality and quantity of data need to be improved, as does the interpretability of image segmentation and diagnosis.
      ● Imaging data in COVID-19 applications may be incomplete, inexact, or inaccurate, and collecting it is expensive and time-consuming, which makes training segmentation and diagnostic networks difficult.
      ● Follow-up is important in COVID-19 diagnosis and treatment and can benefit from machine learning-based methodology, in- and out-of-hospital follow-up that helps track COVID-19 patients, and multidisciplinary integration.

      Conclusion
      ● Artificial intelligence allows for safer, more accurate, and more efficient imaging solutions in COVID-19 applications; X-rays and CT scans are used to demonstrate the effectiveness of medical imaging with artificial intelligence.
      ● Artificial intelligence can fuse imaging data, clinical manifestations, and laboratory results to improve screening, detection, diagnosis, analysis, and follow-up for COVID-19.

      F. Shi et al., “Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19,” in IEEE Reviews in Biomedical Engineering, vol. 14, pp. 4-15, 2021, doi: 10.1109/RBME.2020.2987975.

  18. Reverse Outline for: Cloud Computing in Amazon and Microsoft Azure platforms: performance and service comparison

    I. Introduction
    P1. Describes what cloud computing is and how it applies to the current IT environment model.

    P2. Cloud computing provides shared resources to multiple users, which applies well to small and medium enterprises (SMEs).

    P3. Cloud computing works within limited legal jurisdiction, which raises issues with data protection and confidentiality.

    P4. Cloud is the successor to virtualization, as it allows a more secure and reliable environment using VM isolation. The user does not need to be aware of the hardware, since the cloud's web-based interface is used to operate the cloud environment.

    P5. Up-front capital costs of a datacenter are a disadvantage compared to scalable cloud computing costs.

    II. Related Works

    P6. QoS is implemented to balance the workload and network load.

    P7. Isolation in data centers consumes a lot of energy.

    III. The Architecture and Categories of Cloud Computing Services

    P8. Front end: user-controllable infrastructure and specifications to access the cloud. Back end: cloud provider infrastructure. The central server handles system management. The hardware layer is responsible for managing the physical cloud: servers, routers, power, and cooling.

    P9. Virtualization layer is applied over the physical layer to provide dynamic resources.

    P10. IaaS: Infrastructure as a Service – a service that supports operations, storage, hardware, servers, and networks.

    P12. PaaS: Platform as a Service – a service model that allows a user to rent VM servers.

    P13. SaaS: Software as a Service – a software distribution model that relies on applications hosted by the service provider and on their availability and global accessibility to the user via a specific network or the Internet.

    IV. Test Platforms
    P14. Microsoft Azure is a flexible cloud platform that allows fast development, debugging, and iteration of applications, as well as their further management through a network of Microsoft data centers [8-9].

    P15. To subscribe to Azure, it is necessary to use one of the Microsoft Live accounts (Live, Hotmail, Outlook) and a credit card.

    P16. Amazon Web Services (AWS) is a cloud computing platform offered by Amazon. The main features of the service include low price, high speed, scalability, openness, adaptability, and guaranteed security.

    P17. EC2 and S3 are the two most used options of this platform [10]. EC2 (Elastic Compute Cloud) is a central part of Amazon's platform.

    P18. Amazon provides online services to other web sites and client applications; most of these services are not available to end users but instead let developers use Amazon platform functionality while developing their own applications.

    P19. Through the web interface, the user launches a VM instance using a so-called Amazon Machine Image (AMI), a predefined template with the operating system installed.

    P20. For the purposes of the paper, a t1.micro VM instance based on the Ubuntu 14.04 LTS Linux distribution is created (Table 1).
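Launching that instance programmatically would look roughly like the sketch below. This is an assumption for illustration, not the paper's procedure (the authors used the web interface): the AMI ID is a placeholder since real Ubuntu 14.04 AMI IDs vary by region, and no AWS call is actually made here.

```python
# Sketch of the parameters one would pass to boto3's EC2 run_instances
# to launch a t1.micro Ubuntu 14.04 instance like the paper's test VM.
launch_params = {
    "ImageId": "ami-XXXXXXXX",    # hypothetical Ubuntu 14.04 LTS AMI ID
    "InstanceType": "t1.micro",   # the instance size used in the paper
    "MinCount": 1,
    "MaxCount": 1,
}

# With real credentials one would then run:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   response = ec2.run_instances(**launch_params)
print(launch_params)
```

The same parameters map directly onto the AMI template idea from P19: the image fixes the operating system, while InstanceType fixes the hardware profile.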

    V. Microsoft Azure and AWS Platform Services
    P22. Azure and AWS offer top public cloud solutions, but when comparing the strongest instances on offer, Amazon offers considerably more for a given amount of money.

    P25. The Phoronix Test Suite was used for testing, as it allows testing of Linux platforms and system performance under given conditions.

    P26. MS Azure brings better test results. This could be an important fact for users who intend to use a VM as a web or similar server with a large number of requests per time unit.

    VI. Conclusion
    P27. MS Azure has an easy and intuitive user interface for managing virtual resources, but without the possibility of specific VM adjustments. On the other hand, AWS offers more features for system fine-tuning and gives more options oriented toward managing Linux virtual machines.
