Week 2, Weekly Writing Assignment

As discussed in this week’s lecture, the weekly writing assignment for Week 2 is connected to the 500-Word Summary project. The idea is to create a reverse outline of the article that you select to summarize, because this work will help you better understand each part of the article and generate text that you can use in the first draft of your 500-Word Summary.

After watching the lecture and making notes, do these things to begin the 500-Word Summary project and accomplish this week’s writing assignment:

  • Go to the City Tech Library’s website, navigate to Find Articles, click on the corresponding letters to find the suggested databases for your research (Academic Search Complete, Applied Science and Tech Source, Science Direct, Springerlink ejournals, Wiley Online, or IEEE Xplore).
  • Using one or more of the databases, find an article that is at least five pages long on a subject relevant to your studies and interesting to you.
  • Using the database’s citation tool or the Purdue OWL’s APA guide to references for periodicals, write a bibliographic reference for the article that you selected in your word processor of choice.
  • Then, read the article’s abstract and keep its overview in your mind.
  • Next, read the first paragraph of the body of the article (this follows immediately after the abstract, if there is one; otherwise, it is the first paragraph of text after the article title).
  • Put the article aside and pull up your word processing document with the bibliographic entry.
  • Above the bibliographic entry, type “1.” followed by a one-sentence summary of that paragraph’s main point, terms, ideas, etc. in your own words. It is important to keep the article out of view when you do this. Don’t quote anything. Write everything you remember about that paragraph in your own words.
  • Return to the article and read the second paragraph. Put away the article and return to your word processor document. Below your sentence for the first paragraph and above the bibliographic entry, type “2.” followed by your one sentence summary of that paragraph. Continue in this way for all of the other paragraphs in the article, numbering each.
  • Finally, add a memo header to the top of your word processor document: TO, FROM, DATE, SUBJECT (refer to the lecture for what you should write).
  • Copy and paste your completed reverse outline document into a comment added to this post. If you haven’t done this before, click on the title of this post and then scroll down to the bottom of the page to find the comment box. Make sure that you are logged into OpenLab. Copy-and-paste your reverse outline (memo heading, paragraph summaries, and bibliographic entry) into the comment box and then click “Post Comment.” Wait a moment to see your comment post on the site.
  • Post your reverse outline by the beginning of next week of class, Wednesday, 9/9.
  • Return to your word processor and save a new version of your reverse outline, which you can use as the beginnings of your 500-word summary first draft. Saving a new version of this file will allow you to return to your earlier work (think versioning) in case it is needed. In your new file, take out the numbers next to each paragraph summary and edit the document so that the summary sentences form one long or several shorter paragraphs. You may begin editing this into your first draft of the project, and we will discuss what we will do with this first draft in next week’s lecture.

37 thoughts on “Week 2, Weekly Writing Assignment”

  1. TO: Professor Ellis

    FROM: Michael Lin

    DATE: September 9, 2020

    SUBJECT: 500-word summary reverse outline

    This is a 500-word summary of “Protecting Against Thermal Effect: Part 1: Types of Electric Arc,” an article about how to protect electricians from the thermal hazards of different types of electric arcs. It also discusses the importance of personal protective equipment (PPE) for electricians and presents details about the types of arc, including the open air arc, arc in a box, and moving arc.

    1. The industry that produces PPE has existed for over 20 years, but a gap between knowledge about the electric arc and standardization still exists; this article discusses how to protect electricians from the electric arc’s thermal hazard.

    2. Part 1 discusses the different types of electric arc, their behavior, and their methods of thermal energy dissipation.

    3. Part 2 discusses how statistics are used for future improvement, but information on electric arc incidents is hard to find in government statistical reviews.

    4. Part 3 discusses different ways of protecting against the electric arc.

    5. During the past 15 years, the availability of different fabrics and other materials used in PPE has helped protect electrical workers from electric arcs.

    6. The most important part of any progress is studying and analyzing experience, so understanding electric arc incident data will help us improve.

    7. The range of heat generated by an electric arc is very wide, so PPE alone will not provide absolute protection, and many factors can affect the amount of thermal energy.

    8. Several organizations are involved in standards development and maintenance related to electric arc safety and PPE.

    9. What is an electric arc? Some state that an electric arc is a discharge of electricity from voltage, etc.

    10. Not all electric arcs in electrical equipment are the same; there are five different types of electric arcs, and their classification is based on several differentiating factors.

    11. The open air electric arc is a medium- or high-voltage arc that burns in open air without anything covering it.

    12. The second type of electric arc is the arc in a box, a low-voltage electric arc in an enclosure.

    13. The third type of electric arc is the moving arc, a medium- or high-voltage arc in open air between two parallel conductors.

    14. The fourth type of electric arc is the ejected arc, a medium- or high-voltage arc formed at the tips of parallel conductors or electrodes.

    15. The last type of electric arc is the tracking arc, which is very different from the others; it can occur on a person’s skin under their clothing when they have direct or indirect contact with an energized part.

    16. Knowing the different types of electric arc is very important for electricians and for creating a safe environment for those who work around them.

    References

    Golovkov, M., Schau, H., & Burdge, G. (2017). Protecting Against Thermal Effect: Part 1: Types of Electric Arc. Professional Safety, 62(7), 49–54.

  2. TO: Professor Ellis
    FROM: Joshua Patterson
    DATE: 9/7/2020
    SUBJECT: 500-word summary reverse outline

    My 500-word summary is about the article titled “Emotional Training and Modification of Disruptive Behaviors through Computer-Game-Based Music Therapy in Secondary Education.”

    1. This article’s abstract tells us how music is important to a person’s development and behavior, and how researchers in 1999 used music therapy to help students with their disruptive behavior by using video game music from the game Musichao.

    2. The first section (1) projects the idea that music has the ability not only to help a person with their behavior, but also to affect other characteristics such as creativity and motivation, and to serve as another resource for visual or auditory learners.

    3. The second section (1.1) defines the term “Music Therapy” and explains the effects it can have on different parts of children’s and adolescents’ life.

    4. The third section (1.2) explains the five stages of adolescence and how adolescents are exposed to games to examine the effects on their intelligence, education, and daily life.

    5. The fourth section (2) tells about how they conduct the tests and what their role is as the examiners.

    6. The fifth section (2.1) gives us a brief overview of six participants within a fixed age range to examine how they act within a classroom setting.

    7. The sixth section (2.2) explains how two groups of pre-existing instruments were used for this test.

    8. The seventh section (2.2.1) explains the book that was used as the first instrument and the extensive tests in the book that are going to be run to examine the characteristics of each adolescent participant.

    9. The eighth section (2.2.2) explains the game that was used as the second instrument, how the game works, and the daily results it provides the examiners.

    10. The ninth section (2.3) explains the procedure of how the tests are run on each participant and who will be assigned to deliver each test.

    11. The tenth section (2.4) details the process of examining the test results from the given tests based on the results from the first test in the beginning to the last test at the end.
    12. The eleventh section (3) provides the actual results of the tests, the design of the tests, and the comparison of the tests from the beginning to last test.

    13. The twelfth to seventeenth sections (3.1 – 3.6) explain each of the tests, the scoring system used to keep track of each participant’s progress throughout the tests, what each score means for each participant, and their results on each test.

    14. The eighteenth section (4) gives us an overview of the meaning of the results from the first test to the last; the results show improvement in the participants chosen for this study, and the minor changes make their hypothesis somewhat true, but not enough to say for certain that music does in fact help with all aspects of intelligence.

    15. The final section (5) explains how their tests can be taken as proof that music therapy works, but that their study wasn’t perfect because it lacked certain elements, such as a control group and randomization.

    Bibliography

    Chao-FernĂĄndez, R., Gisbert-Caudeli, V., & VĂĄzquez-SĂĄnchez, R. (2020). Emotional Training and Modification of Disruptive Behaviors through Computer-Game-Based Music Therapy in Secondary Education. Applied Sciences (2076-3417), 10(5), 1796. https://doi-org.citytech.ezproxy.cuny.edu/10.3390/app10051796

    1. My youngest brother plays video games often, and he mostly likes music-based games. I feel like it is a kind of refreshment for him. Your writing is pretty good; I actually liked the article.

      Please read mine if you have time. Thank you.
      Best wishes
      Anny

  3. To: Prof. Ellis
    From: Shital B K
    Date: 09/07/2020
    Subject: Modern Web-Development using ReactJS

    ReactJS is a JavaScript library used to design and develop user interfaces, simplifying the data rendering and binding of large web-based applications. It is mostly used to build modular user interfaces, which simplifies the development of complex web-based applications. It follows the MVC (Model-View-Controller) architecture, where it serves as the View. It also supports server-side rendering using NodeJS, and rendering on mobile devices is supported by React Native. ReactJS has simplified the data binding and rendering process and proven to be much easier for front-end development.
    Some of the features of ReactJS are a lightweight DOM for better performance, an easy learning curve, JSX, one-way data flow, and the virtual DOM. ReactJS interacts with a document object model stored in memory rather than with the browser directly, and hence provides excellent application performance. It is known as one of the easier frameworks to learn, which makes it popular among web developers. JSX is a syntax extension that simplifies React event binding. The framework is highly efficient in performance because of its virtual DOM.
    The main working principle of React is based on MVC and the DOM, where MVC is popular for user interface development and the DOM represents the view of the application. React performs all of its tasks through components; it cannot perform any task without them, and components serve as its basic building blocks. Components are generally designed in a tree structure, which makes the code simple and reusable. The lifecycle of the ReactJS framework is generally completed in three states: the first is the mounting process, after which the DOM is generated, and the last is the placement of the DOM into the container node. The property sets called props and state are used to manipulate a component; they enable the creation of the web application’s user interface using components.
    React, being an excellent framework, has some limitations. One is that it provides only the View entity of MVC, so additional tools must be used to implement other tasks. Inline templates and JSX can be complex and tiresome to use when designing large applications. Compilation issues are also a limitation of ReactJS compared to other frameworks.
    Modern web development has become very dynamic and interactive. As a result, a number of frameworks are used in the industry. ReactJS is one of the most popular, with many features that simplify data rendering and binding. Hence, it is widely used these days for web development and front-end development.

    References
    Aggarwal, S. (2018). Modern Web-Development using ReactJS. International Journal of Recent Research Aspects, 5(1), 133–137

  4. TO: Professor Jason Ellis

    FROM: Nargis Anny

    DATE: September 7, 2020

    SUBJECT: 500-word summary reverse outline

    This is a 500-word summary of “A Smart Agent Design for Cyber Security Based on Honeypot and Machine Learning”. The article highlights the rise of security risks that comes with the rise of social media and the World Wide Web. We’re also introduced to the programs that keep security running, as well as the setbacks they bring to computer systems worldwide.

    1. In the article, GDATA states that millions of cyber attacks are discovered every year. These issues often involve analysis tools that keep track of information. However, the difficulty is keeping an eye on every problem that arises.

    2. With a better understanding of how Cyber attacks work, there’s a better chance of preventing future issues.

    3. The honeypot is one of the most prominent cyber security tools to date.

    4. Developed in 1992, the honeypot is used as a monitoring and detection system that locates harmful malware, so future attacks can be prevented before they even find a system to disrupt.

    5. Part two talks about anomalies and data which has to be protected from harmful software.

    6. Social media sites such as Myspace or Facebook need to be observed so that a social honeypot can detect harmful profiles, as well as any other threats out there.

    7. The authors suggest a linkage defense system, which can bypass the setbacks of past tools.

    8. The linkage system has the honeypots and the defense system coexist by having their management and communication tools work together.

    9. This system is based on the SNMP model used in network management.

    10. Future intruders will be blocked by firewalls if they try to hack into the system.

    11. In the section on machine learning, we learn that computers operate under the program they have been assigned. The concept of machine learning keeps computers adjusted to the data’s structure and how to operate on it properly.

    12. Machine Learning has training models that separate into two phases in order to function.

    13. The first phase is estimating the data through training, by demonstrating tasks like recognizing animals in images or speech translation. The second phase is production. Here we see new data pass through the system in order to get the computer to complete an objective.

    14. The K-Means algorithm helps with clustering data from certain systems.

    15. The decision tree helps branch out all the data structures for testing.

    16. Part 4 returns to honeypots, explaining the different security communication networks. The first part is honeypot deployment, which can monitor either internal or external attacks on the system. With this, we can see attacks that are carried out or attempted on any network.
    17. With DMZs (demilitarized zones), honeypots function as a way to provide public internet services away from the computer’s internal network.

    18. Next, we have tools like KFSensor, Netfacade, Specter, and CurrPorts. KFSensor is a server that watches out for connections with the network. Netfacade allows numerous network host interactions through unused IP addresses.

    19. Networks also have to direct security threats to the firewall, and eventually the honeypot will triage them to see whether they are serious or not.
    20. To conclude, network security is a very serious problem because it is constantly evolving and threats are hard to manage. However, this paper offers a real-life solution to this issue, and the authors are looking to test it out within a real network setting.
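    Points 13 through 15 above mention K-Means clustering as one of the machine-learning building blocks. As a rough illustration of the K-Means idea only (the code and data below are made up for demonstration, not taken from the article), here is a minimal sketch in plain Python:

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Minimal K-Means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assignment step: group points by nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: (p - centroids[i]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each centroid to its cluster's mean
        # (keep the old centroid if a cluster ends up empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious 1-D groups: small values and large values.
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centroids, clusters = kmeans(data, k=2)
print(sorted(round(c, 2) for c in centroids))
```

    Each iteration alternates the assignment and update steps; on well-separated data like this, the centroids settle near the two group means.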

    References:

    Kamel, N., Eddabbah, M., Lmoumen, Y., & Touahni, R. (2020). A smart agent design for cyber security based on honeypot and machine learning. Security & Communication Networks, 2020, Article ID 8865474 (9 pages).

    1. Hi Anny, I remember you from project management class. Overall, you perfectly followed the format the professor asked us to, and your writing is good too. The only thing I will add is that the link to the article is missing in your reference; try to fix that.

    2. Hi Anny,

      I enjoyed reading your summary, and also I’m a Cybersecurity major and never knew about Honeypot. Sounds interesting and worth researching further.

      Overall, your writing style is very informative and easy for those who aren’t necessarily CST majors to get an idea of what you are summarizing.

  5. TO: Professor Ellis
    FROM: Joshua Patterson
    DATE: 9/7/2020
    SUBJECT: 500-word summary reverse outline

    1. Software engineering is very necessary at this time, so the preparation of skilled human resources is essential. One effort that can be made is to develop effective learning methods, and adaptive learning is one of them.

    2.Adaptive learning technologies provide an environment that can intelligently adapt to the needs of individual learners through the presentation of appropriate information, comprehensible instructional materials, scaffolding, feedback, and recommendations based on participant characteristics and on specific situations or conditions.

    3.Adaptive learning can consist of several characteristics, namely: analytics, local, dispositional, macro, and micro.

    4. Students’ difficulties in creating program algorithms can be solved.

    5. Teachers can guide students to learn programming by monitoring them.

    6.There are many adaptive learning models for programming learning.

    7. E-learning that facilitates students’ psychomotor ability requires a capability that enables students to write program code directly into, and have it evaluated by, a particular module in the electronic learning system.

    8. The adaptive learning concept improves students’ psychomotor ability during online learning/teaching using a commercial off-the-shelf LMS.

    9.The psychomotor interaction between student and LMS will be demonstrated by the use of adaptive learning in computer programming courses.

    10. The transaction processes that occur in the web API model start from the LMS server.

    11.How Remote interpreter web API model Works.

    12.Research method Remote interpreter web API model.

    13.Web API Implementation.

    14.Web API model Performance Analysis.

    15. Conclusion and Future Works.

    References:

    Yuana, R. A., Leonardo, I. A., & Budiyanto, C. W. (2019). Remote interpreter API model for supporting computer programming adaptive learning. Telkomnika, 17(1), 153–160. https://doi.org/10.12928/TELKOMNIKA.v17i1.11585

    1. Hi,
      I am actually from networking security; I wanted to do software engineering, but later on I changed my mind. I liked the way you explained its advantages.

      Please read my writings if you have time.

      Best wishes
      Anny

  6. To: Prof Ellis
    From: Tasnuba Anika
    Date: Sep 9, 2020
    Subject: Reverse outline

    1. Alan Turing founded modern computing and AI. Interest in AI grew around the 1980s and 1990s, and it increased drastically in the medical sector around 2016.
    2. AI in medicine is divided into two subtypes, virtual and physical.
    3. Doctors need to do their research by asking a lot of questions, then diagnosing the disease, combining the symptoms, and eventually entering large amounts of data into machines.
    4. By repetition, the algorithm learns to recognize what certain groups of symptoms or certain clinical/radiological images look like.
    5. AI is already being used for various purposes in hospitals.
    6. Radiology is the branch that has been the most upfront and welcoming about the use of new technology.
    7. DXplain, developed by the University of Massachusetts in 1986, is being used for different purposes in the medical field and by medical students.
    8. The da Vinci robotic surgical system, developed by Intuitive Surgical, has robot arms that mimic a surgeon’s hands, especially in urological and gynecological surgeries.
    9. There are different tracers on the market nowadays, including a new feature for ECG tracing.
    10. IBM’s Watson Health will be equipped to efficiently identify symptoms of heart disease and cancer.
    11. Doctors can use AI to take their notes, analyze their discussions with patients, and enter required information directly into EHR systems.
    12. Using AI has reduced manual labor and frees up primary care physicians’ time while increasing productivity.
    13. It has been found that AI successfully classifies suspicious skin lesions, as AI can learn more from successive cases.
    14. Though AI has reduced pressure on humans, it has also reduced the number of job opportunities.
    15. A Digital Mammography DREAM Challenge was performed using AI, which reviewed 640,000 digital mammograms. Even though it performed successfully, there is fear that it could replace real doctors.
    16. AI will be a very important part of medicine in the future, but people must learn how to use it efficiently for better productivity.
    17. It is important for primary care physicians to learn fully how to use AI and to keep a balance between AI usage and real doctor practices.

    Reference
    Amisha, Malik, P., Pathania, M., & Rathaur, V. (2019). Overview of artificial intelligence in medicine. Journal of Family Medicine & Primary Care, 8(7), 2328–2331. https://doi-org.citytech.ezproxy.cuny.edu/10.4103/jfmpc.jfmpc_440_19

  7. TO: Professor Ellis
    FROM: Enmanuel Arias
    DATE: September 23, 2020
    SUBJECT: Reverse Outline of Article

    This is a reverse outline of the article “Explore Modern Responsive Web Design Techniques” written by Elena Parvanova. The article explores the history of web design and the techniques used to make responsive webpages possible.

    1. Most websites were created and managed by the IT departments of large companies. Nowadays, anyone with basic computer skills can create a website with the help of a content management system (CMS).
    2. Web design is a growing industry. The design of a company’s website can lead to its success or failure. Layout, navigation, color scheme, and user experience are key aspects of a great design.
    3. 27 years ago, Tim Berners-Lee created the first website, a hypertext project. The website contained left-aligned text with blue hyperlinks on a white background.
    4. What we know as web design started in 1993 with the introduction of images accompanied by text. The World Wide Web Consortium (W3C) was formed in 1994 and established Hypertext Markup Language (HTML) as the standard for web design. HTML has limitations that JavaScript resolves.
    5. In the following year, Flash was used to design more elaborate websites, but it was not search-friendly. The combination of JavaScript and jQuery replaced Flash. Cascading Style Sheets (CSS) were also introduced the same year.
    6. CSS provides a structure for multiple webpages. CSS allowed websites to be created with a tableless design using percentages, fluid web design. Webpages no longer were dependent on the screen resolution and displayed the same across a variety of devices.
    7. With the increase of mobile devices with internet access, websites needed to be designed around the resolution of the device. The 960 grid system was introduced with a 12-column grid design. The fluid 960 grid system replaced the fixed spacing with percentages.
    8. In 2010, Ethan Marcotte proposed that instead of creating separate layouts for mobile, the same content can be designed around the device’s screen size. This was called Responsive Web Design (RWD).
    9. RWD uses viewport meta tag, grid system, and media queries to determine the layout used for displaying content.
    10. RWD led to the creation of responsive frameworks like Bootstrap and Foundation. These frameworks standardized commonly used elements and offered fluid grid layouts that are resized proportional to their container within the CSS.
    11. Foundation is an open-source framework that provides a variety of tools needed to create a responsive website on a wide variety of devices. It uses SASS stylesheets to implement its components.
    12. Bootstrap is similar to Foundation. It is modular by design, so developers are allowed to select what components they want to use. Its components are implemented using LESS stylesheets.
    13. The increased use of frameworks inspired the creation of the CSS Flexbox and Grid layouts.
    14. Modern web design focuses on the simple organization of elements, positioning of blocks and the order of content.
    15. Flexbox is a CSS model that is optimized for interface design and the positioning of layout items. The parent element will contain the child elements and “flex” accordingly to either fill unused space or shrink to prevent overflowing.
    16. Flex containers are positioned based on their alignment across two axes, the main and cross axes. Flexbox became popular because it allowed web designers to properly align elements for the first time. They could finally easily center a box.
    17. Flexboxes are not intended to layout the design of an entire webpage. The Grid Layout module is used for layout.
    18. The CSS Grid layout gives web developers the ability to create more elaborate responsive layouts. The Grid layout is not as well supported as Flexbox in many browsers.
    19. Modern web design uses a combination of flexboxes and grid layout. The grid layout is used to layout entire web pages, while flexboxes are used to design how a group of elements interact with each other. Together they allow for responsive layouts on a variety of resolutions.
    References

    Parvanova, E. (2018). Explore Modern Responsive Web Design Techniques. Proceedings of the International Conference on Information Technologies, 43–48. Retrieved from http://infotech-bg.com/

  8. TO: Professor Jason W. Ellis

    FROM: Gladielle Z. Cifuentes

    DATE: September 9, 2020

    SUBJECT: 500-word summary reverse outline

    This is a 500-word summary of the article titled “Security Flaws in 802.11 Data Link Protocols,” which discusses the vulnerabilities of a WLAN, which any person with a radio receiver can potentially eavesdrop on due to weak security protocols.

    1. Wired Equivalent Privacy (WEP) is the mechanism that the IEEE 802.11 standard uses for data confidentiality. The article describes how WEP is flawed and does not provide security for WLANs. The study will describe the flaws of WEP and how the researchers went about finding ways to improve or replace it.

    2. WEP has been found to have many vulnerabilities and reasons why it is not a trustworthy security protocol. Because using WEP is optional, encryption of the data is often never used, which is a huge threat to security. Another defect of WEP is the shared key standard it uses for all devices. According to the article, the most serious security breach is that attackers can use cryptanalysis to recover the encryption keys that WEP uses on its devices. “Once the WEP key is discovered, all security is lost.”

    3. Because of the flaws of WEP, the conclusion is that this security protocol was poorly designed. Experienced security protocol designers and cryptographers are needed for the creation of such difficult security protocol designs.

    4. A short-term solution to WEP is the Temporal Key Integrity Protocol (TKIP). TKIP is a set of algorithms that “adapt the WEP protocol to address the known flaws while meeting these constraints”. Packet sequencing and per-packet key mixing are the functions through which TKIP addresses the security flaws of WEP for short-term use.

    5. A long-term solution that researchers found for WEP’s security flaws is the Counter-Mode-CBC-MAC Protocol. Its algorithm uses the Advanced Encryption Standard (AES). This system contains many features that improve on WEP’s operation and security capabilities, including single key usage, integrity protection for the packet header and packet payload, and reduced latency through precomputation and pipelining. The CCM mode was designed to meet the criteria for this security protocol.

    6. CCM works by merging two techniques: counter mode for encryption and the Cipher Block Chaining Message Authentication Code (CBC-MAC). Although using the same key for “both confidentiality and integrity” can be seen as a vulnerability, CCM guarantees that the counter mode never overlaps with the CBC-MAC vector.

    7. This article reviewed the WEP and the security flaws that were found with it. The writers described some short term and long-term alternative protocols that can replace WEP and how they can be implemented for securing a WLAN.

    References:
    Cam-Winget, N., Housley, R., Wagner, D., & Walker, J. (2003). Security Flaws in 802.11 Data Link Protocols. Communications of the ACM, 46(5), 35-39. https://doi.org/10.1145/769800.769823

  9. TO: Professor J. Ellis

    FROM: Brianna D. Persaud

    DATE: September 9th, 2020

    SUBJECT: Reversed Outline of Article

    1. African Americans face hardships when pursuing a career in STEM.

    2. Studies have been conducted to decipher the causes of African Americans not succeeding in STEM.

    3. Students of African American and/or Latin descents do better when they form peer relationships within school.

    4. Establishing peer relationships provides a boost in confidence, passion and companionship.

    5. Although peer support uplifted students of color, the lack of representation of people of color within science departments decreased their determination to succeed.

    6. Racism within education plays a big role specifically for African Americans not reaching their goal of obtaining degrees in science.

    7. The racism factor is not present within HBCUs, resulting in higher graduation rates and better relationships within African American universities.

    8. CRT is constantly being used to fight for people of color that can’t fight for themselves.

    9. Studies deduced that STEM students’ success also branches from initial knowledge of their studies and initial interest dating back to when they were kids.

    10. Dr. Jenkins was asked follow-up questions based on her grounded theory to provide further knowledge.

    11. While conducting her studies, Dr. Jenkins is able to observe the racism that African American STEM students face in college to try to come up with a solution.

    12. Due to Dr. Jenkins’ accomplishments, counter-storytelling is a very appropriate approach to her studies.

    13. Dr. Jenkins states that same-race peers within and outside of the STEM community helped her through her experience as an undergraduate and while pursuing her master’s.

    14. Throughout Dr. Jenkins’ undergraduate years, she typically spent her time with same-race peers; together they studied and took classes with one another, which led to her enjoyment of undergraduate classes at an HBCU.

    15. Studies show that peers have the most effect on STEM students as a whole, so this was made the focal point of all their studies.

    16. Various methods were used in order to establish the impact of peers along with race/racism in STEM success amongst students.

    17. CRT and counter-storytelling were once again employed when determining the direct impact of race and peers for Dr. Jenkins as an African American woman.

    18. The peer support that Dr. Jenkins received from her doctoral study group is credited for her success in science.

    19. African American peer support is a major part of their success, and because of this a culture of success similar to that of other races was established at HBCUs.

    20. ‘Superstar Jasheed’ had a significant impact on Dr. Jenkins’ success in college.

    21. Black students at PWIs were motivational to their peers and helped guide them to stay on track, hence why Jasheed was called ‘superstar,’ along with his tremendous grades in college.

    22. Dr. Jenkins felt isolated despite being around other students in her master’s program, since she was no longer surrounded by peers who formed relationships with her.

    23. Dr. Jenkins’ white and Asian peers were described as “cliquish,” and without Jasheed, she believed she wouldn’t have finished her master’s degree.

    24. In addition to excluding Dr. Jenkins, her peers also plagiarized her work behind her back which placed her credibility in jeopardy.

    25. Dr. Jenkins was ultimately able to make it through her master’s program through the support of her same-race peers and undergraduate friends from her HBCU.

    26. Two friends in particular, one male and one female, stood out to Dr. Jenkins as helping her the most during her master’s program.

    27. Dr. Brown faced similar problems in terms of social interactions, and therefore formed a “fraternity” with other African American peers in the Watkins program.

    28. For both Dr. Jenkins and Brown, same race peers seem to have countered the effects of racism and race, while studies that were conducted show the true nature and struggle of African American STEM students.

    29. Studies also show how peer support, above all else, had the most influence on Dr. Jenkins both positively and negatively.

    30. Dr. Jenkins repeatedly points to same-race peer support as the deciding factor in her enjoying her undergraduate degree, and to the lack of same-race peer support as why she didn’t enjoy her master’s degree.

    31. Dr. Jenkins describes her master’s degree experience as oppressive due to the limitations of not having the same support she had as an undergraduate.

    32. Dr. Jenkins’ method of counter-storytelling essentially sheds light on the rigorous path of African American STEM students and how much more difficult it is for them to succeed in STEM programs compared to other races.

    33. Studies that were reported in this article ultimately show how African American women face so many limitations and how opportunities aren’t shared equally amongst all races in STEM programs.

    34. These studies also raise questions about what keeps African American STEM students going in their pursuit and how their experiences can be improved.

    35. HBCUs should be considered more often as places for African American students to flourish in pursuing their careers.

    36. For African American women to succeed in STEM programs, it seems they must establish a network of same-race friends, and universities should establish policies and standards that give African American women the same opportunities as other women.

    Reference:
    Watkins, S. E., & Mensah, F. M. (2019). Peer support and STEM success for one African American female engineer. Journal of Negro Education, 88(2), 181–193. https://doi.org/10.7709/jnegroeducation.88.2.0181

  10. TO: Professor Ellis
    FROM: Nakeita Clarke
    DATE: Sept 9, 2020
    SUBJECT: Reverse Outline of Article

    1. Concern and anxiety regarding artificial intelligence and its potentially dangerous abilities.
    2. Considering whether or not to regulate A.I.
    3. Definition and interpretation of A.I.
    4. Predicting an assumption regarding the A.I. singularity.
    5. Dangers that could occur if A.I. is not regulated.
    6. Introducing the governance of A.I. as a policy.
    7. The AAAI chimes in on whether it agrees A.I. should be monitored.
    8. A.I. is not likely to overtake humanity because it has no motivation.
    9. It may already be too late to attempt to create international regulations for A.I.
    10. The argument that regulating A.I. will come at a social and economic cost.
    11. A.I. exhibits superior medical advantages.
    12. A.I. helps with search and rescue.
    13. A.I. is used in the psychological industry.
    14. Examples of current A.I. usage in everyday technology.
    15. Limiting the progression of A.I. will affect industries already benefiting from it.
    16. Focus A.I. regulation on A.I.-enabled weaponry.
    17. A public petition urges the UN to ban weaponized A.I.
    18. Similar treaties may encourage countries to adopt one for A.I.
    19. The main challenges of weaponized A.I.
    20. Proposal of an A.I. guidance system.
    21. A.I. would benefit from a guidance system.
    22. A human-guided A.I. system will not work.
    23. An A.I.-powered guidance system will have flaws.
    24. Humans are crucial to the management of A.I. systems.
    25. Is it ethical to condemn weaponized machines while opting to use machines in scenarios that would endanger humans?
    26. A.I. is disrupting the job market.
    27. e-Discovery disrupts the legal job market.
    28. A.I. will create an uneven job market.
    29. The effects of unemployment and income disparities can be seen in Europe.
    30. Economists believe that A.I. will lead to the creation of new jobs.
    31. A committee should be created to monitor A.I.’s impact on the job market.
    32. Ways to combat job loss due to A.I.-based initiatives.
    33. Ways to mitigate the social and economic challenges A.I. presents.
    34. An almost utopian alternative to A.I.’s impact is possible if society changes its response to A.I.
    35. Public open dialogue about A.I. will pave the way for productive policies.

    Reference
    Etzioni, A., & Etzioni, O. (2017). Should artificial intelligence be regulated? Issues in Science & Technology, 33(4), 32–36. http://search.ebscohost.com/login.aspx?direct=true&db=gft&AN=124181372&site=ehost-live&scope=site

  11. TO: Prof. Ellis
    FROM: Kevin Andiappen
    DATE: September 9, 2020
    SUBJECT: Reverse Outline of Article

    This is a 500-word summary of the article “THE DANGER OF USING ARTIFICIAL INTELLIGENCE IN DEVELOPMENT OF AUTONOMOUS VEHICLES,” which discusses the risks that come from having artificial intelligence in automobiles.

    1. Although self-driven cars have recently become popular, the idea has been around for years.

    2. The goal of self-driving cars is to dramatically decrease the car-accident death toll, which is caused primarily by human error. Artificial intelligence can process data far quicker than humans, which decreases reaction time in a dangerous situation.

    3. At the end of November 2018, Tesla cars had traveled a total of one billion miles in autonomous mode. The statistics show one accident every 3 million miles. The Department of Transportation says there is an accident every 492,000 miles in America, making self-driving cars seven times safer.

    4. The Society of Automotive Engineers created a scale for determining the intelligence and capabilities of a vehicle, ranging from 0 to 5.

    5. Artificial intelligence for cars needs to meet quality and time requirements for decision-making in situations like traffic.

    6. When programming the AI, the simplest approach is to have it focus only on traffic rules before making a decision.

    7. Tesla has created a shadow mode in its software that monitors the driver’s actions and sends the info back to them for analysis.

    8. One of the tools that helps teach self-driving cars comes from NVIDIA. It creates a lifelike interactive world that is normally used for gaming but is very useful for AI learning.

    9. During the transition from human driven cars to self-driving ones, insecure driving behavior must be recognized. A human driver with a lot of driving experience can recognize an inexperienced driver instantly. AI may be trained to have the same mentality.

    10. There is no 100% safe solution for self-driving cars.

    11. AI will be able to respond to traffic situations much faster than humans. However, human drivers may try to abuse it by intentionally cutting in front of autonomous cars, forcing them to brake, or by pulling in front of them at highway entrances.

    12. Another problem with self-driving cars is fake road signs. If someone changes a road-closed sign to a maximum-speed-limit-50 sign, it will almost certainly cause an accident. This can happen to a human driver as well; the concern is whether the AI will make the same mistake.

    13. Digital light technology works like a projector. It can shine on the road to project symbols and/or lanes. This technology can be used to deceive a self-driving car to follow the fake lane and cause it to crash or go to another location.

    14. Artificial intelligence is a challenge for developers because it requires you to prepare for every possible scenario. An AI can also be evil if programmed to be. The safety precautions used in self-driving cars to prevent accidents could be reprogrammed to cause accidents.

    15. All the scenarios mentioned are just a few of the many possible dangers that can come from self-driving cars. Developers of artificial intelligence need to be aware of these situations so that they can properly educate the AI.
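The 0-to-5 scale mentioned in point 4 corresponds to the SAE J3016 driving-automation levels. As a minimal illustration, the scale can be held in a simple lookup table (level names follow the published SAE standard; the function is my own example, not from the article):

```python
# SAE J3016 driving-automation levels, as referenced in point 4.
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

def describe_level(level: int) -> str:
    """Return the SAE name for a numeric automation level."""
    if level not in SAE_LEVELS:
        raise ValueError("SAE levels run from 0 to 5")
    return SAE_LEVELS[level]
```

For example, a Level 2 system still requires a human supervising at all times, while Level 5 needs no human driver in any conditions.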
     
    References
    Kiss, G. (2019). The Danger of Using Artificial Intelligence in Development of Autonomous Vehicles. Interdisciplinary Description of Complex Systems, 17(4), 716–722. https://doi-org.citytech.ezproxy.cuny.edu/10.7906/indecs.17.4.3

  12. To: Professor Ellis
    From: Daniel Romanowski
    Date: September 8, 2020
    Subject: Reverse Outline

    1) This article attempts to address the lack of quantum artificial intelligence policy in today’s government by providing non-technical details of what quantum artificial intelligence is, and the threats or opportunities it may pose.
    2) The article is not intended as a full scientific explanation of quantum theory, or artificial intelligence, and sources for this article are public.
    3) Quantum mechanics differs from classical mechanics in that it can better explain and predict the behavior of very small particles such as atoms.
    4) Quantum technology is based on quantum effects such as wave-particle duality: a quantum unit may behave as a particle or as a wave.
    5) “Quantum superposition” means that a particle is not in one place or one state at any particular time; rather, it can be in all positions and all states at the same time.
    6) Decoherence is when a quantum unit is no longer in a quantum superposition and can be explained using classical mechanics.
    7) Measurement/observation of a quantum unit is such that the observed is not separate from the instrument used to observe it; they are one and the same.
    8) “Quantum entanglement” means that a particle being observed will change its own state due to the observation, and will also change the state of the particle it is entangled with, despite there being no physical connection.
    9) “Artificial Intelligence” can be described as a man-made, autonomous system that can make decisions for itself.
    10) There are two types of AI: strong and weak. Weak AI is for one task, whereas strong AI is for more complex tasks. “Artificial Super Intelligence” is when AI becomes smarter than human brains.
    11) Most of today’s AIs are considered to be “weak AIs”, given that they cannot perform more than one task. Examples of current AIs are ones found in computer gaming, and even “Siri”.
    12) In order to move from “Weak AI,” to “Strong AI,” a vast improvement in computing power is needed, such as “quantum computing”.
    13) Companies like Microsoft and Google are pursuing development in AI, and NASA has its own lab as well.
    14) IBM has noted that quantum computers will not only have more processing power than traditional computers, but will also be able to do things differently.
    15) Neural Nets are the way computers use machine learning to perform tasks.
    16) Current neural networks use “nodes” to store data, with nodes stacked on top of each other sharing information top to bottom. Research into quantum computing aims to change this structure.
    17) Future computers will not be only AI-driven or only quantum, but both at the same time.
    18) Quantum computing in itself is not considered dangerous, but it has the potential to be.
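The superposition and measurement ideas in points 5–7 can be illustrated with a toy one-qubit simulation in plain Python: a state is a pair of amplitudes, and measurement collapses it to 0 or 1 with probabilities given by the Born rule. This is a sketch for intuition only, not a real quantum simulator.

```python
import random

def measure(amp0: complex, amp1: complex) -> int:
    """Collapse a one-qubit state amp0|0> + amp1|1> to 0 or 1 (Born rule)."""
    p0 = abs(amp0) ** 2
    total = p0 + abs(amp1) ** 2
    return 0 if random.random() < p0 / total else 1

# Equal superposition |+> = (|0> + |1>) / sqrt(2): both outcomes equally likely.
amp0 = amp1 = 2 ** -0.5

random.seed(42)
counts = [0, 0]
for _ in range(10_000):
    counts[measure(amp0, amp1)] += 1
# Repeated measurements split roughly evenly between 0 and 1.
```

Before measurement the qubit is in both states at once (point 5); each measurement yields a single definite outcome, mirroring the collapse described in point 7.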

    Reference:
    Taylor, R. (2020, January 19). Quantum Artificial Intelligence: A “precautionary” U.S. approach? Retrieved September 03, 2020, from https://www.sciencedirect.com/science/article/pii/S030859612030001X

  13. TO: Professor Ellis
    FROM: Lia Barbu
    DATE: September 9, 2020
    SUBJECT: Reverse Outline of Article
    1. Cybersecurity is essential to protect our systems, and it will be crucial in the future.
    2. Cybersecurity must keep up with new technologies and be ready for new computational models such as quantum technologies.
    3. Quantum theory was one of the most significant technological developments of the 20th century.
    4. In the last few years, control of quantum systems has developed. A breakthrough may be possible soon due to interest in quantum technology programs by various nations and big tech companies like Google, IBM, and Microsoft, which have started developing quantum hardware and software.
    5. Quantum computers will be the most valuable quantum technology due to their computational power.
    6. Achievements in quantum technologies already exist, such as Google’s “Bristlecone” processor and satellite quantum communication.
    7. Quantum computers are no longer a myth, and cybersecurity must prepare for this new era.
    8. Quantum cybersecurity has developed, and it deals with attacks enabled by quantum technologies.
    9. Quantum technologies can bring negative or positive aspects to cybersecurity. There are three scenarios: in one, everything is secure; the other two explore what new challenges quantum technologies can create.
    10. In the first scenario, the honest party has classic technologies, and the adversary has a large quantum computer.
    11. In the second scenario, the honest party has limited access to quantum technologies, and the adversary can use any quantum technologies.
    12. The third scenario looks to the future: quantum computation devices exist, and all parties involved in the process would protect their data and be secure.
    13. The focus will be on quantum technology’s effects on cryptographic attacks and attacks on the new quantum hardware.
    14. The article does not offer a comprehensive survey of either quantum cybersecurity research or the quantum cryptography that has been developed.
    15. The article aims to clear up false impressions and make quantum cybersecurity research accessible to non-experts.
    16. The authors explain that quantum computing power is widely misunderstood and clarify four of the most common myths: that quantum computers are simply faster, that they perform computations instantaneously, that they can solve virtually all NP-complete problems, and that basing a cryptographic protocol on problems hard for a quantum computer is sufficient to make it secure.
    17. Even though quantum attacks seem far away, there are three essential reasons to address them now: security can be broken retroactively, secure cryptographic solutions take time to create, and we must be ready to implement the new technology.
    18. Cybersecurity research in post-quantum cryptography is divided into three classes according to the adversary’s use of quantum technology: access to an oracle/quantum computer, modification of security definitions, and the changes required to follow classical security protocols.
    19. There are cryptosystems considered to be secure to a quantum computer attack, and the article considers three issues: confidence, usability, and efficiency.
    20. The article next explains what can happen when the adversary can make changes to security notions and what steps should be taken to prevent and stop this.
    21. Quantum rewinding is a technique that adds a mechanism forcing malicious adversaries to behave as weak ones.
    22. Quantum technology also brings positive aspects to cybersecurity if included in the honest security protocols.
    23. As quantum technologies develop, quantum protocols should become a reality.
    24. Practicality covers research involving quantum technologies that are presently achievable.
    25. Quantum hacking means that quantum gadgets open the door to new attacks, such as side-channel attacks specific to these gadgets.
    26. Device independence is secure against side-channel attacks but has a high resource cost.
    27. Standardization is essential and should be created for the new quantum technology.
    28. Protocols should be created for the new technology.
    29. Quantum technology will become a major part of the computing and communication environment.
    Reference:
    Wallden, P., & Kashefi, E. (2019). Cyber security in the quantum era. Communications of the ACM, 62(4), 120–129. https://doi.org/10.1145/3241037

  14. TO: Prof. Ellis
    FROM: Albert Chan
    DATE: Sept. 9, 2020
    SUBJECT: Reverse Outline of Article

    The purpose of this reverse outline is to condense the contents of “A First Look At Zoombombing,” an article that analyzes why and how zoombombing (henceforth known as zbing) occurs, then suggests a simple solution to the issue of zbing.
    Introduction
    1. Identifies various virtual conferencing tools before mentioning the recent series of attacks of zbing
    2. Discussion of best practices to prevent zbing but not enough insider information on how the attacks are done (e.g. brute force, insider, etc)
    3. Cursory introduction on future analysis on 2 social media platforms (Twitter, 4chan) and research on how to identify which postings of meeting credentials are “asking” attackers to zoombomb(henceforth known as zb) a meeting room
    4. Research shows that most (above 50%) postings on both social media platforms are indeed “asking” attackers to zb their meeting room. Disclaimer telling readers that nothing is censored, don’t be offended
    Background
    5. Threat model: Call to action, Attackers Gather, Attackers Gain Information, Attackers Harass Participants
    6. Identifies top 10 most used online meeting software (unsure if in listed order), has embedded links to each service
    7. Chart for data on each service (free or not, how much to upgrade), also lists year of release to show how new said service is. Zoom established in 2011, but has risen to prominence during the pandemic, thus coining the term zbing
    8. 8/10 of the top 10 popular online meeting services are free to use
    9. Short paragraph stating that there will be comparisons between all 8/10 online meeting services that are free to use
    10. All services follow a “you know the meeting ID, you know the way in” model. Only 4 services provide a password and 2 provide a waiting room; noticed a typo in the paragraph; also look into the limits of a meeting room (participants, time)
    Datasets
    11. P1 – P6: Selects Twitter and 4chan, describes processes to collect data (eg creating an API to collect posts [Twitter]), analysis on live threads with meeting ID on Zoom (4chan) or posts with meeting ID (Twitter). View on ethics: publicly sourced, everything can be found, no anonymity
    Identifying Zoombombing Threads
    12. P1 – P3: Introduction on how researchers separated zbing posts from non-zbing posts by organizing a codebook. Most likely still some false positives and false negatives in the end
    13. P4 – P13: On 4chan, Zoom and Google Meet have ~50% accuracy of zbing; ~50% of the posted links and messages are people asking to be zb-ed. On Twitter, much less % of people ask for attackers. Note: majority if not all Google Hangouts & Skype links posted with good intentions, not bad ones.
    14. P14 – P23: Identification of each post asking to be attacked, time, insider/not insider, others
    Quantitative Analysis -> VIII. Related Work
    15. In-depth Analysis of zbing and identification as well as separation of terms, themes, identity, contact
    IX. Conclusion
    16. Unique meeting links for each participant = solution
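The solution in point 16 — a unique meeting link per participant — can be sketched with standard-library HMAC tokens. The host's server mints a link whose token binds one participant to one meeting, so a leaked link exposes (and can revoke) only that attendee. The function names and URL format here are my own illustration, not the article's design or any real service's API.

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # hypothetical server-side secret

def personal_link(meeting_id: str, participant: str) -> str:
    # The token commits to (meeting, participant), so it cannot be
    # reused by someone else or for another meeting.
    token = hmac.new(SERVER_KEY, f"{meeting_id}:{participant}".encode(),
                     hashlib.sha256).hexdigest()[:20]
    return f"https://meet.example.com/j/{meeting_id}?p={participant}&t={token}"

def admit(meeting_id: str, participant: str, token: str) -> bool:
    # Recompute the token and compare in constant time before opening the room.
    expected = hmac.new(SERVER_KEY, f"{meeting_id}:{participant}".encode(),
                        hashlib.sha256).hexdigest()[:20]
    return hmac.compare_digest(expected, token)
```

If a would-be zoombomber posts their own link, the host can see exactly whose credential was shared; a stranger presenting someone else's token under a different name is rejected.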

    References
    Ling, C., Balci, U., Blackburn, J., Stringhini, G. (2020). A First Look at Zoombombing. Computers and Society, 1(1), 1-14. https://arxiv.org/pdf/2009.03822.pdf

  15. TO: Professor Ellis
    FROM: Teodor Barbu
    DATE: September 9, 2020
    SUBJECT: Reverse Outline of Article
    1. Easy access to information facilitated groups of people working on open projects over the internet and innovative companies knew how to exploit this as an important tool.
    2. GitHub and SourceForge are two platforms where people can open projects and developers work together to achieve a common goal.
    3. Open-source software development (OSSD) became an alternative to getting outside knowledge and a way to benefit both the organizations and developers.
    4. To effectively tackle the problems of OSSD, developers were separated into categories that either exploit internal resources or explore outside knowledge.
    5. The authors of this article investigate how exploration and exploitation impact the development of a project.
    6. For this research, data were gathered from 17,691 GitHub repositories.
    7. A team of developers, called an organization, can work on one or more projects, and can collaborate with other organizations to complete the project.
    8. GitHub encourages collaboration from the outside of an organization as a way of bringing new ideas and solutions.
    9. Organizational learning is how a company exploits and explores internal and external knowledge, and its capability to transform that knowledge into an organizational asset.
    10. Exploration and exploitation are ideas from organizational learning theory addressed by Perretti and Negro, Rullani and Frederiksen, and March. They considered collaborations with new developers from outside a team vital for the survival of the organization.
    11. This experiment considers that both exploration and exploitation have a positive impact on the project performance. So, what percentage of any of them is the best approach?
    12. To gather and analyze the data, a Python-powered web crawler was used, focused on GitHub projects older than 300 days and with at least five people.
    13. The number of commits was considered relevant for developers’ performance.
    14. Exploitation and exploration were the independent variables of the experiment, measured respectively as a project’s internal members versus its external developers.
    15. The experiment shows that repositories have more external developers, and that successful project completion is positively impacted by more external interactions.
    16. In the cases followed through a software release, we also notice increased external collaboration.
    17. Three models reveal that a repository is successful when the number of external collaborations is higher, and that performance drops when the number of internal members is higher.
    18. As Model 4 monitors software release cases, we notice that performance is affected after the release because development switches to maintenance done by the internal team, and external interaction is no longer mandatory.
    19. This experiment demonstrates the importance of free, unlimited interaction in OSSD.
    20. Exchanging ideas with collaborators outside the team proved beneficial for the project’s success and for the team’s future continuity.
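The sampling and measurement steps in points 12–14 (keep repositories older than 300 days with at least five people, then relate commits to internal versus external contributors) can be sketched as a filtering pass. The field names and example records below are my own assumptions for illustration; they are not the authors' actual crawler output.

```python
from datetime import date

# Hypothetical repository records, mimicking fields a GitHub crawler might collect.
repos = [
    {"name": "alpha", "created": date(2018, 1, 10), "members": 7,
     "external_devs": 12, "commits": 640},
    {"name": "beta",  "created": date(2019, 11, 2), "members": 3,
     "external_devs": 1,  "commits": 80},
]

def eligible(repo, today=date(2020, 1, 1), min_age_days=300, min_members=5):
    # Sampling criteria from point 12: project age over 300 days,
    # at least five participating members.
    age = (today - repo["created"]).days
    return age > min_age_days and repo["members"] >= min_members

sample = [r for r in repos if eligible(r)]

# Point 13 treats commit counts as the performance measure; here it is
# normalized per internal member so teams of different sizes compare fairly.
performance = {r["name"]: r["commits"] / r["members"] for r in sample}
```

In this toy sample only "alpha" survives the filter: "beta" is both too young and too small, matching how the study excluded short-lived, tiny projects.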

    Lee, S., Baek, H., & Oh, S. (2020). The role of openness in open collaboration: A focus on open‐source software development projects. ETRI Journal, 42(2), 196–204. https://doi.org/10.4218/etrij.2018-0536

  16. TO: Professor Ellis
    FROM: David Beauge
    DATE: Sept 9, 2020
    SUBJECT: 500-Word Summary
    This is a 500-word summary of “Analog Games for Information Security (Awareness) in the Digital World” by Margit Scholl. It explains how the process of “digital transformation” can be smoothed out by using gamification to educate employees on the importance of information security.
    1. The professional world is rapidly becoming more digitized and it’s our responsibility to guide that trajectory towards a beneficial path.
    2. Software programming has an arguably deliberate impact on real life social norms.
    3. Digital vulnerabilities have real life, tangible consequences on both professional and personal, possibly bodily, levels.
    4. Information security is a combination of ensuring that new technology is innately secure and properly educating IT professionals and end users.
    5. Awareness is the key to combatting increasingly sophisticated types of attacks such as social engineering, but it’s an ongoing process rather than some ultimate state.
    6. End Users need to be consistently refreshed and updated on the types of attacks for Awareness to remain effective.
    7. The intangibility of the digital world is a barrier for comprehension of the new work culture.
    8. Interactive methods of raising awareness are far more effective than non-interactive ones.
    9. Games are an extremely effective tool for teaching due to their interactive nature and lack of real consequence.
    10. Gaming is receiving academic recognition as a learning tool, but it has seldom been adopted because of inflated expectations.
    11. Despite the acknowledged effectiveness of gamification there isn’t a lot of empirical evidence to validate it.
    12. Introducing game mechanics, like points, to the workplace can encourage change in behavior.
    13. Game Based Learning consists of motivation, feedback, practice, and reinforcement.
    14. Traditional web-based teaching tools such as PowerPoints lack communication, while games can promote scenario-driven problem-solving skills.
    15. Games make the players want to learn more intrinsically.
    16. While games have unanimously been accepted as a powerful teaching tool, the actual design and execution need more study.
    17. Besides software, Tabletop games are another possible place to gain inspiration from due to their tangibility.
    18. Tabletop games encourage beneficial behavioral changes because you have to learn and abide by the rules in order to master the game.
    19. Games have been used to encourage more young women to be interested in computer science and have encouraged voluntary repetition that resulted in retained knowledge.
    20. Tabletop games are good for evoking an emotion and software is good for encouraging repetition.
    21. There is no practical difference between analog and digital games.
    22. Infosec awareness requires active engagement and participation from the end users to be effective.
    23. There are three elements to promote: Knowledge, Intention, and Ability.
    24. Games can make people voluntarily engage in activities that are normally considered boring.
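The game mechanics named in points 12–13 (points, feedback, practice, reinforcement) can be sketched as a minimal awareness-training tracker. The class and exercise names below are invented for illustration; this is not from Scholl's article.

```python
class AwarenessGame:
    """Toy gamified security-awareness tracker: points per completed
    exercise, immediate feedback, and a level-up threshold."""

    LEVEL_SIZE = 100  # points needed per level

    def __init__(self):
        self.points = 0

    def complete(self, exercise: str, correct: bool) -> str:
        # Reinforcement: reward correct answers; smaller reward for trying,
        # so practice is never penalized.
        gained = 25 if correct else 5
        self.points += gained
        # Immediate feedback, one of the Game-Based Learning elements in point 13.
        return f"{exercise}: +{gained} points (total {self.points})"

    @property
    def level(self) -> int:
        return self.points // self.LEVEL_SIZE

game = AwarenessGame()
game.complete("spot-the-phish", True)
```

Even this tiny loop shows why points work in the workplace (point 12): every action produces visible progress, and repeated practice compounds toward the next level.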

    Scholl, M. (2018). Analog Games for Information Security (Awareness) in the Digital World. Annual International Conference on Computer Science Education: Innovation & Technology, 39–46. https://doi-org.citytech.ezproxy.cuny.edu/10.5176/2251-2195_CSEIT18.120

  17. TO: Professor Ellis
    FROM: Alvin Ferreira
    DATE: September 9,2020
    SUBJECT: 500-word summary reverse outline

    This is a 500-word summary of “AN ANALYTICAL STUDY OF IOT BASED APPLICATIONS.” The article discusses the benefits that IoT has provided over time to businesses, home and building owners, cities, and environmental analysis. IoT is basically a system that can transfer data over a network without requiring human-to-human or human-to-computer interaction.

    1. The Internet holds important information about the processes encountered in achieving advanced technology; the web grows more valuable in every aspect.

    2. IoT devices develop a sense of behavior by accepting or reinforcing choices about the ways they can share their data. These things may obtain data from various segments, or they could be involved in different administrations.

    3. The latest Internet of Things model was expected to be testing and shipping by 2020. There will be unmatched incentives for storage and messaging: remotes, individually connected wired sensors, machines, inventions, spaces, and levels, along with interlinked M2M RFID gadgets.

    4. IoT will improve urban communities by easing public traffic, improving business security, and protecting the population.

    5. IoT will include transport, human-service, and climate-control frameworks that provide access to airport, rail, and transit data from local sites for transmission. Internet pathways will span the metro area.

    6. The growth of Wi-Fi Internet in the home computing business is largely an integrated business view of devices accessed by electronic gadgets.

    7. Many organizations are thinking about building platforms that combine robotics with areas such as human-service inspection, life inspection, and remote sensing of the status of homes and buildings. In home structures, many devices will work well with the web.

    8. The Internet will provide a smart building management framework consisting of big data sources that let facility managers monitor a structure’s energy use.
    9. A smart grid combines data and controllers to deliver energy efficiently, managing two-way exchanges between suppliers and buyers.

    10. Smart grid key data and matching innovation elements will include progress detection and monitoring of management streams, plus a computerized base station to transmit information via the Internet.

    11. Smart health lets hospitals that need to monitor their patients’ physical condition use IoT. IoT sensors collect physical data, and user portals and the cloud test and store the data and send remote information for later testing and review.

    12- Smart Health uses mechanism to secure confident sensor health information distribution, use complex calculation to break up information share it with physicians via remote network.

    13. Smart mobility and transportation are managed by IoT applications that apply standards for purchasing and research participation. The client initiates the procedure, negotiating route preferences, while the intelligent program emphasizes detecting fraudulent use.

    14. IoT could be used where electric vehicles are part of traffic; they are needed to reduce fuel costs, and the impact of hazardous environmental conditions has greatly increased drivers' attention to them.

    15. The smart mobility and transportation arrangement was implemented with a variety of capabilities, e.g., testing of Li-ion batteries and remote control with online troubleshooting, with fault-free operation keeping support costs down.

    16. The smart factory will help integrate other features, including computer-based logic, machine learning, and computerized information-tracking tasks, and will coordinate the assembly process with M2M communication.

    17. The M2M keys chosen through the "Mechanical" sections will place a heavy emphasis on the smart factory line and key concept data. Smart factory mechanical processes will require less support time, less downtime, and less spending on spares.

    18. Industry 4.0, called the fourth industrial revolution, depends on a digital framework that can be configured to connect to the web.

    19. There have been numerous inquiries into efforts to tackle environmental pollution and waste; nature requires smart surveillance and control methods and actions.

    20. Environmental IoT applications can be classified into two main categories: natural resource management and eco-friendly quality and security management. Resource management identifies each common resource: animals, forests, fish, coal, oil, grain, fresh water, wind, gold, copper, and iron.

    21. IoT research can provide a compelling method for communicating data about each of these resources through sensors and for highlighting suitable alternatives to current patterns of use.

    22. IoT innovations can monitor and manage air quality and gather information from urban areas.

    23. IoT can help estimate plant emissions, remotely detect fires, and assist with farming.

    24. Conclusion: IoT is an innovation that connects things to people and the web. IoT requires a structured approach to designing, accrediting, and certifying programs, conferences, and events, each with a single, specific use. IoT is a fast-growing development with a bright future.
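    The smart health points above (paragraphs 11 and 12) describe sensors collecting physical data and sending it to the cloud for later analysis. A minimal sketch of the packaging step in Python might look like the following; the vital-sign names and alert threshold are illustrative assumptions, not details from the article:

```python
import json
import time

def package_reading(patient_id, heart_rate, temperature):
    """Bundle one sensor reading as JSON for transmission to the cloud."""
    reading = {
        "patient_id": patient_id,
        "heart_rate_bpm": heart_rate,
        "temperature_c": temperature,
        "timestamp": int(time.time()),
        # Simple local flag; real systems apply far more complex analysis.
        "alert": heart_rate > 120 or temperature > 38.0,
    }
    return json.dumps(reading)

payload = package_reading("p-001", heart_rate=72, temperature=36.6)
```

    In practice the payload would also be encrypted before leaving the device, matching the article's point about securing confidential sensor data.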

    Reference

    Kumar, D. (2020). An analytical study of IoT based application. Acta Technica Corviniensis - Bulletin of Engineering, 13(1), 73–78.

  18. TO: Professor Jason W. Ellis
    FROM: Mahim M. Pritom
    DATE: Sept 9, 2020
    SUBJECT: Reverse Outline of Article
    This is a reverse outline of the article “Mobile-Based Driver Sleepiness Detection Using Facial Landmarks and Analysis of EAR Values,” which discusses a facial recognition system capable of identifying closed eyes and computing the Eye Aspect Ratio (EAR) from facial landmark points, with an accuracy rate of about 92.85%.
    1. According to studies, sleepiness while driving causes injuries ranging from minor to life-threatening, some resulting in death, in both developed and developing countries.
    2. Even though automotive industries have started to develop special devices designed to keep drivers awake while driving, only certain brands of vehicles, such as Mercedes-Benz, BMW, and Volvo, have the technology due to the expensive production cost, leaving people driving vehicles equipped with simpler technology still at risk of sleepiness-related injuries.
    3. Rahman et al. conducted a study to identify sleepiness based on eye-blink analysis using a webcam connected to a computer. They used the Viola-Jones method to detect the eye area of the face with the Haar-like feature algorithm, which helps expose sleepy eyes and evaluate the frames of closed eyes caught on camera.
    4. Jacobe et al. conducted another study utilizing two models of Artificial Neural Network (ANN) to predict sleepiness in a driving simulation.
    5. Even though both studies gave satisfying results, there were problems such as converting the color of each frame, inadequate detection process, and obstacles to install the camera inside the car.
    6. Mohammad et al. utilized the Haar Cascade Classifier on a mobile device to detect and observe the eye sclera in both closed and open states to determine the color reflected back.
    7. Even though this method can generate highly accurate results, there are some limitations: the driver's face must be aimed directly at the camera, and the method needs further development to recognize eye areas precisely.
    8. Another method was proposed by Jabbar et al. to detect drowsiness by combining Deep Neural Networks and extraction of facial points.
    9. Although this system could identify drowsy driver by 81%, the application process of this method requires a computer with high specification because of the complex learning process which is very expensive and time consuming.
    10. Soukupova and Cech used the EAR method to measure the facial landmarks in the eye area, using Euclidean distance to recognize blinks from six different landmarks. Since the number of extraction points applied in this study was limited, the most influential points for closed-eye detection were selected to speed up the computing process.
    11. The tree regression method is applied to refine incompletely processed points that are produced during the extraction process.
    12. A library is used to determine the number of dots generated on the face.
    13. A face-detection library developed by Google is utilized to detect the extracted facial landmark dots.
    14. Soukupova and Cech assigned a threshold value of 0.20 for comparison with the EAR, classifying eyes as open if EAR > 0.20 or closed if EAR < 0.20.
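    The EAR calculation summarized in paragraphs 10 and 14 can be sketched in Python from Soukupova and Cech's six-landmark formulation; the sample coordinates below are invented for illustration and are not from the article:

```python
import math

def ear(p1, p2, p3, p4, p5, p6):
    """Eye Aspect Ratio from six (x, y) eye landmarks.
    p1 and p4 are the horizontal eye corners; p2, p3 (upper lid)
    and p6, p5 (lower lid) are the vertical lid points."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def eye_closed(landmarks, threshold=0.20):
    """Apply the 0.20 threshold: below it, the eye counts as closed."""
    return ear(*landmarks) < threshold

# Made-up coordinates: an open eye is tall relative to its width...
open_eye = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
# ...while a nearly closed eye has almost flat lids.
closed_eye = [(0, 0), (2, 0.4), (4, 0.4), (6, 0), (4, -0.4), (2, -0.4)]
```

    With the 0.20 threshold from paragraph 14, any frame whose EAR falls below it is counted as a closed-eye frame.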
    References
    Huda, C., Tolle, H., & Utaminingrum, F. (2020). Mobile-based driver sleepiness detection using facial landmarks and analysis of EAR values. International Journal of Interactive Mobile Technologies, 14(14), 16–30. https://doi.org/10.3991/ijim.v14i14.14105

  19. TO: Professor Ellis
    FROM: Adewale R. Adeyemi
    DATE: 09/06/2020
    SUBJECT: Reverse Outline of Article
    1. The Medical Internet of Things (MIOT) is a group of devices that can connect to the internet and measure or monitor patient vital signs through wearable and implantable devices, and it has been an efficient new technology for the healthcare system.
    2. The MIOT structure consists of the perception layer, which collects vital data through wearables; the network layer, which transmits the data collected by the perception layer; and the application layer, which provides the interface needed by users and integrates the information from the other two layers.
    3. As MIOT is being used extensively by more patients, the security and privacy of these patients' data cannot be taken for granted.
    4. Taking security and privacy as the backbone of Medical internet of things is paramount to its success.
    5. Due to the amount of real-time data the Medical Internet of Things transmits, it is important to provide enough resources to protect patients' security and privacy. Below are the four security and privacy recommendations.
    6. Data integrity indicates the trustworthiness of data; data has to be intact and unchanged. Data usability ensures the ability of authorized users to derive useful information from data. Data auditing is an effective way to track the quality of data, as poor-quality data can impact decision making. Patient information deals with how sensitive patient data, like sexual orientation and genetic information, can be made unavailable and unreadable to unauthorized users.
    7. Most MIOT devices have very low memory, and the data that has been collected needs to be stored. Cloud storage is currently being used, and it has some existing solutions to the security and privacy requirements.
    8. Encryption is carried out through cryptography, which takes plain text and applies a mathematical algorithm to turn it into ciphertext. It is implemented at three levels of communication: link, node, and end-to-end encryption. End-to-end is the most secure of the three because it does not allow data to be transmitted as plain text through the network nodes.
    9. Although it is very important to secure patient data, using complex algorithms that require more resources and slow down transmission is not recommended, as security, privacy, and limited resources have been a major stumbling block for the e-health system. A few scientists have proposed ways key transfer can be managed while saving resources at the same time.
    10. Abdzen and Tanjaoui proposed a key management scheme that is lightweight, strong, and uses fewer resources, while Gongal et al. also designed lightweight algorithms and encryption based on the problems the healthcare system is facing and improved on previous algorithms.
    11. Access control, another existing solution, authenticates and authorizes users based on admin policies and determines who can view sensitive data. Some encryption modes are also implemented.
    12. Access control is important because patients' information is shared electronically. There have also been some setbacks in the authorization currently being utilized because of its noncryptographic approach.
    13. Third-party auditing is another existing solution. As we know, MIOT device data are stored in the cloud, so it is crucial to audit the cloud service provider to figure out whether its practices are ethical and patient information is not being sold or left unprotected. This is mostly done by a third-party entity. Auditing can be dynamic, batch, or based on performance metrics.
    14. Over time, different auditing methods, like supervised machine learning approaches and relational learning methods, have been proposed and studied. This can all come at a huge cost; however, Govaert et al. found in a study that surgical auditing can reduce costs astronomically.
    15. Among other solutions is data anonymization, which separates sensitive patient identifiers from sensitive data. The current version of k-anonymity being used has some vulnerabilities, but it is being improved upon by various algorithms, like Liu and Li's cluster methods.
    16. With the advancement of technology, future security and privacy challenges in MIOT will arise. Among them are insecure networks (Wi-Fi), which can be vulnerable to man-in-the-middle attacks, lightweight protocols for devices, and data sharing.
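    A minimal sketch of the policy-based access control described in paragraphs 11 and 12 might look like this in Python; the role names and permissions are hypothetical, not taken from the article:

```python
# Admin policies mapping each role to the permissions it is granted.
# These roles and permission names are made up for illustration.
POLICIES = {
    "physician": {"read_vitals", "read_history", "write_notes"},
    "nurse": {"read_vitals"},
    "billing": {"read_invoices"},
}

def can_access(role, permission):
    """Return True only if the admin policy grants `permission` to `role`.
    Unknown roles get an empty permission set, i.e. deny by default."""
    return permission in POLICIES.get(role, set())
```

    A cryptographic approach, which the article notes is missing from current authorization schemes, would additionally bind these checks to keys or signed tokens rather than a plain lookup table.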

    References
    Sun, W., Cai, Z., Li, Y., Liu, F., Fang, S., & Wang, G. (2018). Security and privacy in the Medical Internet of Things: A review. Security & Communication Networks, 1–9. https://doi.org/10.1155/2018/5978636

  20. To : Professor Ellis
    From :David Requena
    Date: Sept 08, 2020
    Subject: 500-Word Summary

    Although the innovation of cloud computing has changed many technologies, it also raises new issues with computing, security, and several other aspects. As with every technological invention, new security measures must be taken as we further our technological knowledge. In today's world, there are already security measures for dealing with the possible threats to cloud computing; however, traditional security is constantly becoming outdated. The following methods are currently considered solutions for risks to cloud security: trust in third parties, identification of security threats, and better security using cryptography.
    Cloud Services:
    There are three main types of cloud services, each with a different function or purpose but a common goal. The three models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). IaaS gives consumers the resources they need to deploy and run software, including storage, network, and compute resources. PaaS gives users the ability to deploy software onto a cloud infrastructure; this service is usually provided by a third party and is mainly used to develop software on the provider's infrastructure. SaaS allows a third party to provide and host software for its customers' use over the internet.
    Trust is a major factor in any type of cloud-related technology because the cloud is a globalized service that many people in various countries interact with. Third-party companies are the ones that provide the different types of cloud services to consumers, and they oversee everything from security to privacy. According to the article 'Addressing Cloud Computing Security Issues,' “Third parties are trusted within a cloud environment by enabling trust and using cryptography to ensure the confidentiality, integrity and authenticity of data and communications while attempting to address specific security vulnerabilities.” This simply means that it is possible to trust third parties if they are willing to commit to helping secure the servers by keeping them private and encrypting them so they are harder to break into, even if someone tries. The article also states that “the ability to clearly identify, authenticate, authorize and monitor who or what is accessing the assets of an organization is essential to protecting an IS from threats and vulnerabilities.” Trusting another company is difficult because it is harder to verify every action if it is not being watched and constantly monitored; therefore, companies have a hard time deciding what to outsource and what to keep in-house. The way to trust a company is to have some sort of barrier or filter on the information you're sharing with your partner company: “Separation is the key ingredient of any secure system and is based on the ability to create boundaries between entities that must be protected, and those which cannot be trusted.” This is a great solution for any company, if both the third party and the company commit.
    There are many threats in cloud computing, but first they need to be identified. Traditional security measures counter some of them, but because cloud computing is a fairly new technology, it requires a different approach to security. Identifying threats may take some time because there are several areas to consider, such as “availability and reliability issues, data integrity, recovery, privacy and auditing.” Identifying vulnerabilities is complicated, but there are building blocks that can be used to design secure systems. These important aspects of security apply to three broad categories of assets that need to be secured: data, software, and hardware resources. Building blocks are basic systems that can be reused to deploy protections faster, ensuring that solutions are developed and deployed in the areas that are having security problems. They work this way so they can target different areas at the same time; for example, if a cloud is experiencing both data loss and a data breach, a building block can help solve these problems if it was developed for that specific purpose.
    The third way to make the cloud environment more secure is by implementing cryptography. Hackers are often able to get past security by finding outdated security measures. According to the article, the best way to secure the cloud is through the “use of a combination of Public Key Cryptography, Single-Sign-On technology to securely identify and authenticate implicated entities.” Public key cryptography is a modern cryptographic method of communicating safely without having to agree on a secret key in advance. It uses a private key and a public key with an algorithm to secure the data. For example, the sender uses the receiver's public key to encrypt a message; that way, the only way to decrypt it is with the receiver's private key. Single Sign-On (SSO) technology lets users access many applications with only one password instead of multiple credentials. One example is Google services: when you have a Google account, you are instantly granted many services like Google Drive and Google Photos, and you can access all of them by logging in once, thanks to SSO. These two methods make logging in and transferring data safer for everyone involved.
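    The public-key exchange described above can be illustrated with a toy RSA example in Python; the tiny textbook primes below are for clarity only and are nowhere near secure (real systems use 2048-bit keys via a vetted library):

```python
# Toy RSA sketch of public-key encryption. Not secure: illustration only.
p, q = 61, 53
n = p * q                 # modulus, part of both keys
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e

def encrypt(m, public=(e, n)):
    """Sender encrypts a small integer with the receiver's public key."""
    exp, mod = public
    return pow(m, exp, mod)

def decrypt(c, private=(d, n)):
    """Only the receiver's private key can recover the message."""
    exp, mod = private
    return pow(c, exp, mod)
```

    Here only the holder of the private exponent `d` can recover the message, which is exactly the property that lets data pass through an untrusted third party as ciphertext without exposing the plaintext.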
    References:
    J. Ryoo, S. Rizvi, W. Aiken, and J. Kissell, “Cloud Security Auditing: Challenges and Emerging Approaches,” InfoQ, Mar. 8, 2015. [Online]. Available: https://www.infoq.com

  21. TO: Professor Ellis
    FROM: Mamadou M. Bah
    DATE: Sept 9, 2020
    SUBJECT: 500-Word Summary

    1. There are various techniques for storing energy, and they are the future of the Smart Grid systems that are used to improve the existing energy system.
    2. Renewable energy has many advantages in providing clean energy, but it also has some disadvantages: it is not yet widespread, and it is expensive. To reduce the disadvantages that renewable energy faces today, such as cost, popularity, production, and sustainability, storing energy in different ways is a good solution.
    3. As the demand for and supply of electricity vary all the time, energy storage is used to store energy when there is excess production and to use that energy when demand is higher than production.
    4. Energy that is generated from renewable sources can use supercapacitors for short-term storage and batteries for long-term storage.
    5. Despite the diversity of battery technologies, there are two types that are mostly used: single-use batteries, which use a chemical reaction to produce electricity, and rechargeable batteries.
    6. The rechargeable battery cell is the one used for energy storage technologies, which are classified into electrical, mechanical, electrochemical, and thermal.
    7. Batteries are made of cells that are connected in series through electrolytes, where the charging and discharging processes follow reduction-oxidation reactions.
    8. There are different battery technologies in different forms, such as lithium-ion, sodium-sulfur, flow, lead-acid, and many others, and these batteries are costly for customers.
    9. Multiple types of rechargeable batteries are described in this article, and they are very important for the renewable energy sector and the development of the smart grid system.
    10. The lithium-ion battery has many similarities to the other battery technologies and has high performance, but it has some big disadvantages, such as cost and sensitivity to high temperatures, which requires regulation.
    11. We can find lithium-ion batteries in several types on the market, consisting of graphite and lithium metal amalgams, and they are used in cars, phones, and many other things.
    12. Lithium-sulfur batteries are used in electric cars and the smart grid system because they are very lightweight and can store a large amount of energy.
    13. Lithium iron phosphate is one of the best types of batteries so far because these batteries are safe, can be charged rapidly, and do not produce waste.
    14. Lithium-air batteries can be found in electronics, electric cars, and the grid system, and they have a high specific energy that can be compared to liquid fuels.
    15. Sodium-based batteries operate at room temperature, and they have a negative potential and specific capacity.
    16. Sodium-sulfur batteries operate at 300 degrees Celsius, their charging process is efficient, and they can be found in vehicles and stationary applications.
    17. The sodium nickel chloride battery, also called ZEBRA, has a low internal resistance and high specific energy, which makes it expensive, and it operates at high temperatures.
    18. Flow batteries are built with two separate sides that hold chemical energy, and the system is easy to use, reliable, and keeps energy for long periods of time.
    19. The vanadium redox flow battery is a combination of metal ions, and it is the only such technology that has been commercialized.
    20. Zinc-bromine batteries use zinc metal for the anode plate and bromine for the cathode plate, and the energy is stored in the zinc metal.
    21. Nickel-based batteries are divided into Nickel metal hydride and nickel-cadmium batteries which are described in the next two paragraphs.
    22. Nickel metal hydride batteries can be recharged, and they are used in many industrial applications, such as cameras, computers, medical equipment, the smart grid, and telecommunications, because of their low cost, efficiency, charging capacity, low resistance, and many other benefits.
    23. Nickel-cadmium batteries are used in mobile phones, laptops, and telecommunications, but they have a low cycle life and require some maintenance.
    24. Lead-acid batteries are the most used battery technology around the world because of their low cost, reliability, life cycle, weight, and rechargeability.
    25. Metal-air batteries are non-rechargeable; they have a long cycle life and the highest energy density, and they are used mostly in telecommunication devices.
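    The storage behavior in paragraph 3 (charge when production exceeds demand, discharge when demand exceeds production) can be sketched as a simple dispatch loop in Python; the numbers and capacity below are made-up illustrations, not values from the article:

```python
def dispatch(production, demand, capacity, charge=0.0):
    """Greedy storage sketch: store surplus energy in the battery,
    then draw it down when demand exceeds production."""
    served = []
    for prod, dem in zip(production, demand):
        surplus = prod - dem
        if surplus >= 0:
            charge = min(capacity, charge + surplus)  # store the excess
            served.append(dem)                        # demand fully met
        else:
            draw = min(charge, -surplus)              # cover the deficit
            charge -= draw
            served.append(prod + draw)
    return served, charge
```

    Real smart grid dispatch also accounts for charge/discharge losses and battery degradation, which this sketch ignores.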

    Salkuti, S. R. (2020). Comparative analysis of electrochemical energy storage technologies for smart grid. Telkomnika, 18(4), 2118–2124. https://doi.org/10.12928/TELKOMNIKA.v18i4.14039

  22. TO: Professor Ellis
    FROM: Shamia Campbell
    DATE: Sep 9, 2020
    SUBJECT: Reverse Outline of Article
    1. This article's abstract tells us how flash floods are becoming more of a natural disaster and why a good response system is needed to provide more accurate and reliable data on flash floods.
    2. The floods endangered many people in the villages and cost them their lives. These floods started to become more damaging and dangerous because of the climate change going on in the world.
    3. Flood damage can make construction work very expensive because a lot of work has to be put into fixing the damage.
    4. Over the decades, communities have maintained local goods because they help the community as a whole during dangerous flood risks. The local government has a big role when it comes to mitigation.
    5. FEWRS is an information system that can reduce the risks of floods about to occur.
    6. FEWRS provides instant flood information, and there are three different stages of floods that it records. The three stages can make it clearer how bad a flash flood will be.
    7. There are advance warnings to lessen flash flood damage; they are a tool that helps make FEWRS a success. The more advanced the warnings are, the more signals the information systems will get.
    8. The information system is important because it sends all the signals for the flash flood. Signals play a big role in the systems because they put everything in place and give more clarity.
    9. Some systems lack information, which has an impact on the flood disaster because of the factors that contribute to the success of these systems.
    10. FEWRS research has limited its focus mostly to disaster management to watch more of the system; the success factors deserve more focus.
    11. The IS model is important because without it there would not be any relation to the market and therefore nothing for the organization that is there.
    12. The IS model has drawn more interest than the other models and has shown success; it has helped researchers choose the factors that will work well with FEWRS.
    13. Some people can't make up their minds whether they want FEWRS or IS because they have different understandings of these factors.
    14. The chart shows the most important factors indicating which to choose when seeking a good organization.
    15. Flood hazard mitigation involves engineering that can be expensive.

    Hammood, W. A., Asmara, S. M., Arshah, R. A., Hammood, O. A., Al Halbusi, H., Al-Sharafi, M. A., & Khaleefah, S. H. (2020). Factors influencing the success of information systems in flood early warning and response systems context. Telkomnika, 18(6), 2956–2961. https://doi.org/10.12928/TELKOMNIKA.v18i6.14666

  23. TO: Prof. Ellis
    FROM: Stephan Dominique
    DATE: 9/13/2020
    SUBJECT: 500-Word Reverse Outline on Internet Gaming Disorder

    This is a 500-word summary regarding the prevalence of Internet Gaming Disorder in adolescents, as well as an analysis of it across the 1990s, 2000s, and 2010s.

    1. Internet gaming disorder has been looked into seriously after multiple cases of violence attributed to the prevalence of video games, a famous case being the Colorado movie theater massacre of 2012.

    2. Internet gaming disorder must have more research done on it before it can become an official disorder with multiple different criteria.

    3. There are disputes over whether the word “internet” should even be in the term “Internet Gaming Disorder,” as studies have shown that people aren't addicted to the internet itself but are using it as a platform to fuel their addiction.

    4. There are also issues with the methods used to determine IGD, as it has been argued that the disorder is comparable to gambling, but that isn't true because money isn't needed to play.

    5. IGD has been studied among different age groups with it being most prevalent in the adolescent age group, having various benefits but also various disadvantages.

    6. Findings on IGD among adolescents have to be synthesized.

    7. There was a search for articles published on IGD, with 458 publications found, but only a total of 16 articles were used because they met the criteria.

    8. There were subgroup analyses performed to identify the influences of IGD such as year, recorded disorder, study location, etc.

    9. The prevalence of IGD has been recorded since the 1990s; low prevalence was found in most cases, while there are barely any high-prevalence cases.

    10. Studies were done on both genders, and Internet Gaming Disorder was found to be a good deal higher in males than in females.

    11. Prevalence of IGD decreased as the years went on but was still very high in certain locations such as Asia and North America.

    12. The results of the IGD studies determined that the disorder was more prevalent in adolescents than in children, with theories that IGD starts in childhood and becomes more rampant when those children become adults and are unaware of the associated risks.

    13. Male adolescents are far more likely to become involved in IGD than females with males engaging in longer gaming sessions as well as being unable to resist playing the games.

    14. Male gamers are more likely to engage in challenging video games that include strategy, fighting, etc., compared to female gamers, who tend to play more casual games.

    15. There are fewer people in the world who suffer from IGD than from gaming disorder generally.

    16. Numbers for IGD are rising in Asia, and it's not a surprise, as major game developers are based in Asia.

    17. One in twenty adolescents is affected by IGD, and this disorder requires a lot more research.

    References

    Fam, J. Y. (2018). Prevalence of internet gaming disorder in adolescents: A meta‐analysis across three decades. Scandinavian Journal of Psychology, 59(5), 524–531. https://doi.org/10.1111/sjop.12459

  24. TO: Professor Ellis
    FROM: Jinquan Tan
    DATE: 9/7/2020
    SUBJECT: 500-word summary reverse outline

    1. Software engineering is very necessary at this time, and the preparation of skilled human resources is essential. One effort that can be made is to develop effective learning methods, and adaptive learning is one of them.

    2.Adaptive learning technologies provide an environment that can intelligently adapt to the needs of individual learners through the presentation of appropriate information, comprehensible instructional materials, scaffolding, feedback, and recommendations based on participant characteristics and on specific situations or conditions.

    3.Adaptive learning can consist of several characteristics, namely: analytics, local, dispositional, macro, and micro.

    4. Students who have difficulty making program algorithms can be helped.

    5. Teachers can guide students to learn programming by monitoring them.

    6.There are many adaptive learning models for programming learning.

    7. E-learning that facilitates students' psychomotor ability requires a capability that enables students to write program code directly into the electronic learning system, where it is evaluated by a particular module.

    8. The adaptive learning concept can improve students' psychomotor ability during online learning/teaching using a commercial off-the-shelf LMS.

    9. The psychomotor interaction between the student and the LMS will be demonstrated by the use of adaptive learning in computer programming courses.

    10. The transaction processes that occur in the web API model start from the LMS server.

    11. How the remote interpreter web API model works.

    12. The research method for the remote interpreter web API model.

    13. Web API implementation.

    14. Web API model performance analysis.

    15. Conclusion and Future Works.
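    The remote interpreter flow in paragraphs 10 through 13 (the LMS forwards student code to a web API, which runs it in an interpreter and returns the output) can be sketched as follows; the function name and timeout are assumptions, and a production system would also sandbox the process:

```python
import subprocess
import sys

def run_submission(source_code, timeout=5):
    """Sketch of the interpreter side: run student Python code in a
    separate process and return its output for the LMS to grade.
    Raises subprocess.TimeoutExpired if the code runs too long."""
    result = subprocess.run(
        [sys.executable, "-c", source_code],
        capture_output=True, text=True, timeout=timeout,
    )
    return {
        "stdout": result.stdout,
        "stderr": result.stderr,
        "exit_code": result.returncode,
    }
```

    In the article's model this function would sit behind a web API endpoint that the LMS calls over HTTP; the subprocess isolation here stands in for that remote boundary.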

    References:

    Yuana, R. A., Leonardo, I. A., & Budiyanto, C. W. (2019). Remote interpreter API model for supporting computer programming adaptive learning. Telkomnika, 17(1), 153–160. https://doi.org/10.12928/TELKOMNIKA.v17i1.11585

  25. TO: Professor Jason Ellis
    FROM: Ye Lin Htut
    DATE: Sept 13, 2020
    SUBJECT: Reverse Outline of Article
    This is a reverse outline of the article “Drawn to distraction: A qualitative study of off-task use of educational technology,” which discusses students' experience of off-task use of educational technology and presents a qualitative study of students' off-task use of technology during class.
    1. The study examines the student experience of off-task uses of educational technology. This empirical investigation is informed by a philosophy that differs from traditional cognitive theory by shifting emphasis from mental processes to the physical use of technologies.
    2. Current studies on educational technology primarily depend on a rational understanding of awareness, which, like any other idea, implies some study of existence, justification, ingrained beliefs, and emotion.
    3. This idea comes from a current school of philosophy that is increasingly used in the study of human-technology interaction. It represents a move beyond traditional structures of experience and consciousness, and the concept requires two changes: one in inner balance and one in characterization.
    4. Even with an expanded understanding of how the use of technologies is characterized, philosophy-focused academics infrequently perform practical research on people's technologically mediated practices and habits.
    5. This study is one part of a broader analysis of educational technology's mediation of student attention in an educational setting. The students were between 16 and 20 years old, and the college and its institutes follow a technology policy of letting students bring their own devices to school.
    6. The results show that student technology use is common. Digital technologies have largely superseded notebooks, calculators, and pencils; students sometimes do not even carry books to school because they can rely on their laptops alone.
    7. Students frequently described the impulse to engage in off-task interests as an attraction toward frequently visited, educationally unrelated websites such as social media, which is widely used among students. Students fall into distraction.
    8. If class sessions are considered too hard, students fall behind and resort to distraction. They become emotionally exhausted, disconnect from class, and go to unrelated websites.
    9. Teachers are highly concerned about the problems presented by off-task use of educational technology. One teacher sadly explained that when students look at their laptops and laugh during math, the teacher knows it does not have anything to do with the lesson.
    10. In an ever more digitalized educational system, recognizing why students often use educational technologies for off-task activity is critical. This article presents the idea of an attraction toward frequently visited, educationally unrelated websites.
    11. Students respond clearly to the apparent boredom of lecturing. They describe lessons as boring, which is why they give in to the urge and become distracted.
    12. Qualitative examinations of off-task use of educational technology in real classrooms are increasingly relevant as new systems enter the environment.
    13. How will educators handle off-task use of educational technology? Should digital devices be banned from the classroom, or should devices be controlled by the school or a teacher administrator? This is not impossible, and it would be highly beneficial if devices could only access class-related content.

    References
    Draper, R. J., Smith, L., & Sabey, B. (2004). Supporting change in teacher education: Using technology as a tool to enhance problem-based learning. Computers in the Schools, 21(1/2), 25–42. https://doi.org/10.1300/J025v21n01_03
