Nakeita Clarke’s Instruction Manual on How to Use the UX Process to Develop A Mobile Application Idea

This manual was created to help amateur mobile application creators and prospective developers refine their ideas through research, planning, testing, and revision.

Albert Chan’s Expanded Definition of Machine Learning

Introduction

The purpose of this 750-1000-word expanded definition is to explore the term “machine learning” with regard to the scientific community and society. I will analyze the term as it appears in studies of fairness, education, and machine translation. My working definition is provided afterwards.

Definitions

In the article “A Snapshot of the Frontiers of Fairness in Machine Learning” by Alexandra Chouldechova and Aaron Roth, the definition of machine learning is straightforward: “Machine learning is no longer just the engine behind ad placements and spam filters; it is now used to filter loan applicants, deploy police officers, and inform bail and parole decisions, among other things” (Chouldechova & Roth, 2020, p. 82). To Chouldechova and Roth, machine learning is a technology that has evolved from simple automation into a process that informs increasingly consequential decisions.

On the other hand, the New York Times article “The Machines Are Learning, and So Are the Students” by Craig S. Smith defines the term differently: “Machine-learning-powered systems not only track students’ progress, spot weaknesses and deliver content according to their needs, but will soon incorporate humanlike interfaces that students will be able to converse with as they would a teacher” (Smith, 2019). From this passage, Smith defines machine learning as a means to an end, that end being helping students learn better.

As for the article “On the features of translationese” by Vered Volansky, Noam Ordan, and Shuly Wintner, the definition of machine learning is a practical one: “In supervised machine-learning, a classifier is trained on labeled examples the classification of which is known a priori. The current task is a binary one, namely there are only two classes: O and T” (Volansky, Ordan, & Wintner, 2015, p. 103). To Volansky et al., machine learning is an assisting tool for creating more humanlike translation, and it must be supervised, that is, trained on labeled examples, in order to function correctly.
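To make the idea of supervised classification concrete, here is a minimal sketch of a binary classifier in the spirit of the O (original) versus T (translated) task. The texts, labels, and model choice are my own illustrative assumptions, not the setup Volansky et al. actually used.

```python
# A minimal sketch (not the authors' actual setup): training a binary
# classifier to label texts as original (O) or translated (T).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples; the labels are known a priori.
texts = ["an original English sentence", "a sentence translated into English",
         "another original passage", "another translated passage"]
labels = ["O", "T", "O", "T"]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)                         # supervised training on labeled data
print(model.predict(["a new unseen sentence"]))  # predicts "O" or "T"
```

The key point is that the classifier only learns from examples whose class is already known; prediction on new text is whatever pattern the training data supported.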

Context

The contexts of the three articles differ. Because machine learning is not explicitly defined in each article, the quotes above are the passages most relevant to the term, and I will build on them here.

The first article is a scholarly article examining how machine learning can be made “fair,” or, better put, “objective.” “With a few exceptions, the vast majority of work to date on fairness in machine learning has focused on the task of batch classification” (Chouldechova & Roth, 2020, p. 84). For better or for worse, the quote tells us that fairness has typically been studied through batch classification. Batch classification, in this context, means sorting data by user-defined characteristics and then judging the result by user-defined fairness criteria. Machine learning automates this process and even “learns” how to apply it to other types of data. The weakness of such a method is that humans are the ones defining fairness, and since humans have inherent bias, fairness is difficult to judge.
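As an illustration of what a user-defined fairness check on a batch classifier can look like, here is a small sketch computing a demographic-parity gap. The decisions and groups are hypothetical examples of mine, not data or code from Chouldechova and Roth.

```python
# A minimal sketch (my own illustration): one common "user-defined fairness"
# check for a batch classifier is demographic parity, i.e., comparing
# positive-prediction rates across groups.
def demographic_parity_gap(predictions, groups):
    """Return the gap in positive-prediction rates between groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical batch of loan decisions (1 = approved) and applicant groups.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5: a large gap between groups
```

Note that the metric itself is user-defined, which is exactly the point the paragraph makes: the humans choosing the groups and the criterion are the ones defining what “fair” means.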

The second article is a news piece about technology in education, specifically machine learning and how beneficial it is for teachers: “The system also gathers data over time that allows teachers to see where a class is having trouble or compare one class’s performance with another” (Smith, 2019). For teachers, this system is a way to track a student’s progress or performance without having to analyze the raw data themselves.

For the last article, the context is machine translation. What should come to mind when hearing the term are the familiar browser-based translation services such as Google Translate, Niutrans, Sougou, and DeepL.

Working Definition

Personally, I am majoring in Computer Systems, IT Operations track. However, I have a hobby in translation with the assistance of machine translation. So, my working definition of machine learning is “the application of gathering vast amounts of data, categorizing the data, sorting them out, and analyzing the data to find out the psyche of people.” For example, given a group of 100 respondents, the data collected must first be categorized by gender or whatever category is set. Then, the answers gathered are sorted as correct or incorrect based on the generally accepted answer. Finally, the data is analyzed to produce percentages of which questions were answered correctly most of the time within each category. With that, the machine has a sample of what to expect if someone of category x answers the same data collection set. Done on a macro scale, the machine will be able to predict what a population’s answer could be.
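The steps in that working definition (categorize, sort correct/incorrect, analyze per-category rates) could be sketched roughly as follows. The categories and responses are invented for illustration only.

```python
# A minimal sketch of the working definition above: categorize responses,
# sort them correct/incorrect, and compute per-category accuracy so the
# "machine" can anticipate how a given category tends to answer.
from collections import defaultdict

# Hypothetical responses: (category, answered_correctly)
responses = [("men", True), ("men", False), ("women", True),
             ("women", True), ("men", True), ("women", False)]

totals, correct = defaultdict(int), defaultdict(int)
for category, is_correct in responses:
    totals[category] += 1
    if is_correct:
        correct[category] += 1

for category in totals:
    rate = correct[category] / totals[category]
    print(f"{category}: {rate:.0%} answered correctly")  # expected rate per category
```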

References

Chouldechova, A., & Roth, A. (2020). A snapshot of the frontiers of fairness in machine learning: A group of industry, academic, and government experts convene in Philadelphia to explore the roots of algorithmic bias. Communications of the ACM, 63(5), 82–89. https://doi.org/10.1145/3376898

Smith, C. S. (2019, December 18). The machines are learning, and so are the students. The New York Times. https://www.nytimes.com/2019/12/18/education/artificial-intelligence-tutors-teachers.html

Volansky, V., Ordan, N., & Wintner, S. (2015). On the features of translationese. Digital Scholarship in the Humanities, 30(1), 98–118. https://doi.org/10.1093/llc/fqt031

Nargis Anny – Professor Jason Ellis – New York City College of Technology – ENG2575 OL70 – 10/13/2020

Cyber Security

Cyber security is the process of protecting technological programs, systems, and networks from viruses and other digital threats. These viruses are often set up by anonymous people looking to gain user information, disrupt hardware, or delete data. As technology develops, so do the viruses and the demand for safer online security.

Cyber security started in the 1970s. Bob Thomas, a computer researcher associated with ARPANET (Advanced Research Projects Agency Network), invented a program called “CREEPER.” The CREEPER program would start on a network and cross over from system to system, leaving a trace behind in the form of a message: “I’M THE CREEPER: CATCH ME IF YOU CAN.” CREEPER eventually came to an end thanks to Ray Tomlinson. Tomlinson, who invented email, decided to build on CREEPER and create an equivalent program called “REAPER.” This program managed to follow CREEPER’s trail and delete it permanently, making REAPER the first antivirus program ever created. Eventually, Thomas and Tomlinson’s creations led various software and network companies to realize that there were numerous bugs in their systems that could be tampered with. This became more serious when organizations connected computers and phone lines to create networks of their own, and thus anonymous people could gain access to their information.

From the 1980s to the 2000s, the internet grew more popular around the world as technology improved rapidly. Hackers became more prevalent as computer viruses improved and could not be easily monitored. Robert Morris created the “Morris Worm” in 1988, a program that spread across networks and replicated itself to identify weak spots in systems. While this worked, it caused internet service to slow down and damaged networks heavily. In the 1990s, firewalls and antivirus programs were used to help protect public user information. By the 2000s, more criminal hackers were being taken down with longer jail time and heavier fines for their actions. However, hackers were now able to create virus programs that could hit not only users in one city but people across various parts of the world. While cyber security does help, there are some setbacks: security software often slows down computers and their networks, and many users are bound to have their personal data exposed to people who can use it for any reason.

Technology users have been introduced to numerous cyber security threats such as malware, ransomware, phishing, and social engineering. Malware is software that tampers with user files through various code and damages data and network systems. Ransomware also tampers with user files but demands a payment to get those files back. Phishing sends scam emails to users under the guise of a legitimate source to steal information (address, card information, phone number, login credentials) once someone opens them. Social engineering is when someone gains user information in person and uses it for their own purposes; credit card scammers are one example. These people are known to ask an associate for their card information to buy various goods, such as clothes, jewelry, cars, or even houses, and instead that person’s information and money is stolen.

Even with the millions of dollars that go toward new security programs, there will always be something out there that tops them. Today, technology researchers are looking toward methods that would identify online users’ technology patterns and prevent threats from reaching them in the first place. To conclude, cyber security is something that will progress over time, and so will the viruses that threaten it. Despite this unfortunate reality, the best thing we can do as technology users is to stay on top of every new computer virus that is created. As cyber security advances, we can hope for a program that wipes out any virus instantly and keeps the computer functioning at 100 percent.

Sources:

I. Rouse, M. (2020, April 3). What is cybersecurity? Everything you need to know. TechTarget SearchSecurity. https://searchsecurity.techtarget.com/definition/cybersecurity?amp=1

II. Murphey, D. (2019, June 27). A history of information security. IFSEC Global: Security and Fire News and Resources. https://www.ifsecglobal.com/cyber-security/a-history-of-information-security/

III. What is cyber security? Cyber security defined, explained, and explored. Forcepoint. https://www.forcepoint.com/cyber-edu/cybersecurity

TO: Professor Jason Ellis

FROM: Nargis Anny

DATE: September 22, 2020

SUBJECT: 500-word summary

This is a 500-word summary of “A Smart Agent Design for Cyber Security Based on HoneyPot and Machine Learning.” The article highlights the rise of security risks that comes with the growth of social media and the World Wide Web. We are also introduced to the programs that keep security systems running, as well as the setbacks they bring to computer systems worldwide.

In the article, GDATA states that millions of cyber attacks are discovered every year. Dealing with these issues often involves analysis tools that keep track of information; however, the difficulty is keeping an eye on every problem that arises. With a better understanding of how cyber attacks work, there is a better chance of preventing future issues. The honeypot is one of the most prominent cyber security tools to date. Developed in 1992, a honeypot is used as a monitoring and detection system that locates harmful malware, so future attacks can be prevented before they even find a system to disrupt.

Part two discusses anomalies, data which has to be protected from harmful versions of software. Social media sites such as Myspace or Facebook need to be observed so that a social “honeypot” can detect harmful profiles, as well as any other threats out there. The authors suggest a linkage defense system, which can bypass the setbacks of past tools. The linkage system has the honeypots and the defense system coexist by having their management and communication tools work together; this system is based on the SNMP model used in network management. Future intruders will be blocked by firewalls if they try to hack into the system.

In the section on machine learning, we learn that computers operate under the program they have been assigned. Machine learning keeps the computer adjusted to the structure of the data and teaches it how to operate properly. Machine learning models work in two phases. The first phase is training, in which the system estimates from data by practicing tasks like recognizing animals in images or translating speech. The second phase is production, in which new data passes through the system so the computer can complete an objective. The K-Means algorithm helps maintain clustering for certain systems; Eddabbah indicates that while K-Means is a faster solution, it still has major setbacks (Eddabbah, 2020, p. 3). The decision tree helps branch out all data structures in case of testing.

Part four returns to honeypots and explains the different security communication networks. The first part is honeypot deployment, which can monitor either internal or external attacks on the system; with this we can see attacks that are carried out or attempted on any network. In DMZs (demilitarized zones), honeypots provide public internet services away from the computer’s internal network. Next, there are tools such as KFSensor, Netfacade, Specter, and CurrPorts. KFSensor is a server that watches for connections to the network, while Netfacade allows interactions with numerous simulated network hosts through unused IP addresses. Networks also have to direct security threats to the firewall, and eventually the honeypot will isolate a threat to see whether it is serious or not. To conclude, network security is a very serious problem because threats constantly evolve and are hard to manage.
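To illustrate how clustering like K-Means can support this kind of detection, here is a small sketch that learns clusters of normal connections in a training phase and then flags far-away points in a production phase. The traffic features, numbers, and threshold rule are my own assumptions, not the smart-agent design from the paper.

```python
# A minimal sketch (my own illustration, not the paper's implementation):
# learn clusters of normal connections with K-Means (training phase),
# then flag new connections far from every learned center (production phase).
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical training data: [packets per second, average packet size in bytes]
normal_traffic = np.array([
    [10, 500], [12, 480], [11, 510], [9, 495], [13, 505],   # web browsing
    [80, 120], [75, 130], [82, 125], [78, 118],             # streaming
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(normal_traffic)

# Distance threshold learned from the training data itself.
train_dist = np.min(kmeans.transform(normal_traffic), axis=1)
threshold = train_dist.mean() + 3 * train_dist.std()

# Production phase: score new connections as they arrive.
new_traffic = np.array([[11, 502], [79, 122], [1500, 40]])
new_dist = np.min(kmeans.transform(new_traffic), axis=1)
print("Suspicious:", new_dist > threshold)  # expect [False, False, True]
```

The two phases mirror the training/production split described in the summary: the model is fit once on known-good data and then applied to incoming traffic.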

References:

Kamel, N., Eddabbah, M., Lmoumen, Y., & Touahni, R. (2020). A smart agent design for cyber security based on honeypot and machine learning. Security and Communication Networks, 2020, Article 8865474 (9 pages).

Stephan Dominique’s Expanded Definition of Biometrics

TO: Prof. Jason Ellis

FROM: Stephan Dominique

DATE: 10/29/20

SUBJECT: Expanded Definition of Biometrics

Introduction

The purpose of this document is to expand the definition of the term “biometrics,” which is very popular in today’s advancing technological world. If you use a smartphone that unlocks with fingerprint or face scanning technology, you are already using biometrics. I will cover this topic by first defining the term and the history behind it, followed by the context of the word as well as a working definition.

Definitions

According to Meng-Hsuan Fu, biometrics is defined as “Using human physical characteristics including finger vein, iris, voice, and facial features for recognition” (Fu, 2020, p. 2). This means, for example, that if police use fingerprints found at a crime scene to later identify the criminal, biometrics is being used. To understand biometrics, one must first look at the term “anthropometry,” the study of the specific measurements of the human body. Biometrics stems from this field; without anthropometry, biometrics simply does not exist. Anthropometry involves analyzing the unique properties of humans that make each person different from the next. Going further, the founder of this study was Alphonse Bertillon, who was also the first person to identify a criminal through fingerprints and the inventor of what is now known as the mugshot, another form of biometrics. “Biometrics are physical or behavioral human characteristics that can be used to digitally identify a person to grant access to systems, devices or data” (Korolov, 2019). Essentially, biometrics means that no one else can have access to what you have access to: if there is a password required for something, your body is the key, and only you can unlock it. These two definitions are similar in that both describe biometrics as using the human body to identify a particular person. The difference is that Fu speaks of biometrics in a general sense, while Korolov’s definition focuses on biometric security, applying the same idea to a form of access control that is hard to crack.

Context

The first contextual appearance is where Fu states that “Biometrics is becoming more widely used in access-control systems for homes, offices, buildings, government facilities, and libraries. For these systems, the fingerprint is one of the most commonly used biometrics. Users place their finger on a read device, usually a touch panel. This method ensures a unique identity, is easy to use and widely accepted, boasts a high scan speed, and is difficult to falsify. However, its effectiveness is influenced by the age of the user and the presence of moisture, wounds, dust, or particles on the finger, in addition to the concern for hygiene because of the use of touch devices” (Fu, 2020, p. 5). In this quote, Fu describes where biometrics is commonly used, its benefits as the technology is adopted in typical workplaces because of its ease and efficiency, and its drawbacks as well. The second contextual appearance is where Korolov mentions that “62 percent of companies are already using biometric authentication, and another 24 percent plan to deploy it within the next two years” (Korolov, 2019). She is essentially saying that because biometrics is highly effective, companies are quickly getting on board to keep their information tightly guarded.

Working Definition

Biometrics is extremely popular and, as such, will require workers in the I.T. field to aid in installing, maintaining, and fixing such technologies. Biometrics is relevant to my career because I plan to start off on the technical side, which includes being a field service technician. This job consists of going through workplaces and maintaining equipment such as fingerprint scanners. The knowledge of handling such equipment will likely become mandatory as time goes on and more companies switch to biometric security.

References

Fu, M.-H. (2020). Integrated technologies of blockchain and biometrics based on wireless sensor network for library management. Information Technology & Libraries, 39(3), 1–13. https://doi-org.citytech.ezproxy.cuny.edu/10.6017/ital.v39i3.11883

Korolov, M. (2019, February 12). What is biometrics? 10 physical and behavioral identifiers. CSO Online. https://www.csoonline.com/article/3339565/what-is-biometrics-and-why-collecting-biometric-data-is-risky.html

Summary of “Addressing cloud computing security issues”

To: Professor Ellis

From: David Requena

Date: Sept 25, 2020

Subject: 500-Word Summary

My 500-word summary is based on the article “Addressing cloud computing security issues” by D. Zissis and D. Lekkas. The article tells us how the cloud is growing at a very fast rate, and how important it is to find measures to fight the new problems we are currently facing.

Although the innovation of cloud computing has changed many technologies, it also raises new issues with computing, security, and several other aspects. As with every technological invention, new security measures must be taken as we further our technological knowledge. In today’s world, there are already security measures for dealing with possible threats to cloud computing; however, traditional security is constantly becoming outdated. The following methods are currently considered solutions to cloud security risks: trust in third parties, identification of threats, and better security using cryptography.

Cloud Services:

There are three main types of cloud services, each with a different function or purpose. The three models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). IaaS gives the consumer the fundamental resources needed to deploy and run software, including storage, network, and computing resources. PaaS gives the user the ability to deploy applications onto a cloud infrastructure; this service is usually provided by a third party, and PaaS is mainly used to develop software on the provider’s infrastructure. SaaS allows a third party to provide and host software for their customers’ use over the internet.

Trust is a major factor in any type of cloud-related technology. This is because the cloud is a globalized service; many people in various countries interact with it. Third-party companies are the ones that provide the different types of cloud services to consumers, and they oversee everything from security to privacy. According to the article “Addressing Cloud Computing Security Issues,” “Third parties are trusted within a cloud environment by enabling trust and using cryptography to ensure the confidentiality, integrity and authenticity of data and communications while attempting to address specific security vulnerabilities.” This means it is possible to trust third parties if they commit to helping secure the servers, keeping data private, and encrypting it so that it is harder to break into, even if someone tries. The article also states that the purpose of cloud security is to have “the ability to clearly identify, authenticate, authorize and monitor who or what is accessing the assets of an organization is essential to protecting an IS from threats and vulnerabilities.” Trusting another company is difficult because it is hard to verify every action when it is not constantly watched and checked. Therefore, people have a hard time deciding what to outsource and what to keep within the company. The way to trust a company is to have some sort of barrier or filter for the information you share with your partner company. “Separation is the key ingredient of any secure system and is based on the ability to create boundaries between entities that must be protected, and those which cannot be trusted.” This is a great solution for any company, if both the third party and the company commit.

There are many threats in cloud computing, but they first need to be identified. Cloud computing is a fairly new technology; some threats have already been countered by traditional security, but because the technology is new, it requires a different approach to security. Identifying threats may take some time because there are several areas to consider, such as “availability and reliability issues, data integrity, recovery, privacy and auditing.” Identifying vulnerabilities is complicated, which is why building blocks are used in designing secure systems. These important aspects of security apply to three broad categories of assets which need to be secured: data, software, and hardware resources. Building blocks are basic systems that can be reused to protect assets and deploy solutions faster; they are developed and deployed in the areas that are having security problems. They work this way so that different areas can be targeted at the same time. For example, if a cloud is experiencing both data loss and a data breach, a building block developed for that specific purpose can help solve those problems.

The third way to make the cloud environment more secure is by implementing cryptography. Hackers are often able to get past security by finding outdated security measures. According to the article, the best way to secure the cloud is through the “use of a combination of Public Key Cryptography, Single-Sign-On technology to securely identify and authenticate implicated entities.” Public key cryptography is a modern cryptographic method of communicating safely without having to agree on a secret key in advance. It uses a private key and a public key, together with an algorithm, to secure the data. For example, the sender uses the receiver’s public key to encrypt a message; the only way to decrypt it is with the receiver’s private key. Single-Sign-On (SSO) technology lets users access many applications with a single set of credentials instead of maintaining multiple passwords. One example is Google services: with one Google account you are instantly granted many services, like Google Drive and Google Photos, and logging in once gives you access to everything thanks to SSO. Together, these two methods make logging in and transferring data safer for everyone involved.
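As a rough illustration of the public key exchange described above, here is a minimal sketch using Python’s cryptography package (my own choice of library; the article does not prescribe any particular tool). The sender encrypts with the receiver’s public key, and only the receiver’s private key can decrypt.

```python
# A minimal sketch of public key cryptography: encrypt with the receiver's
# public key, decrypt with the receiver's private key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Receiver generates a key pair; the public key can be shared openly.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender encrypts a message with the receiver's public key.
message = b"cloud credentials"
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(message, oaep)

# Only the receiver's private key can decrypt the ciphertext.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == message
```

No shared secret is exchanged beforehand, which is the property the summary highlights; SSO then reduces how many such credentials a user has to manage in the first place.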

References

Zissis, D., & Lekkas, D. (2010, December 22). Addressing cloud computing security issues. ScienceDirect. https://www.sciencedirect.com/science/article/pii/S0167739X10002554