Summary of R. Zaman et al.’s “Energy access and pandemic-resilient livelihoods: The role of solar energy safety nets”

To: Professor Jason W. Ellis

From: Pavel Hlinin

Date: March 3, 2021

Subject: 500-Word Summary of Article About the Importance of Solar Energy Safety Nets

This 500-word summary discusses restricted access to stable sources of energy for certain groups of people. The authors argue that stable energy access is especially important because, during the pandemic, its absence exacerbated the already poor living conditions of vulnerable groups. First, the authors describe developing countries with a low standard of living. The role of solar energy safety nets has grown recently because they provide many social benefits and offer an efficient way to survive during the pandemic.

Developing countries were hit hard by the COVID-19 pandemic; however, solar energy safety nets give developing countries a chance to resist pandemics and raise their standard of living. People who live in remote rural areas (the so-called "last mile") often lack access to technology that depends on energy services. At the same time, access to energy is very important: it helps poor people improve their education and develops their capacity to prepare for market-related or natural risks. Access to electricity also provides access to education and jobs and allows people to stay at home, decreasing the spread of the virus.

However, people in the last mile very often do not have access to electricity. In general, access to energy is expensive and often requires government subsidies and material assistance. Energy assistance programs make energy available to the poorest groups. Expanding the grid into rural areas is one solution for people who live far away; their issues may also be resolved by an independent solar home system, which provides energy at the household level. Many countries have programs that extend independent solar home systems, but national political processes sometimes delay the provision of off-grid energy access.

As discussed above, COVID-19 hit poor people hardest and increased the difficulty of paying for energy services. Some countries took actions to stave off an energy crisis, such as a 50% cut in the price of solar kits or helping companies operate with renewable energy sources. At the same time, other countries expanded their other federal pro-poor programs and adopted over 1,000 social programs, thereby reducing funding for energy programs. As a result, some solar energy service providers declined and even went bankrupt. Continuous government funding of energy programs is essential to expand energy access for last-mile people and help them better cope with the impact of the COVID-19 pandemic. A well-designed solar energy safety net (SESN) program makes it possible to get out of the current crisis: it creates employment opportunities for people living in these areas and, as production capacity grows, lets them earn money by selling the energy they produce to others. These programs also open up a spectrum of affordable services for the poor, thereby smoothing out social inequalities.

Energy poverty affects millions of people in developing countries, limiting their ability to cope with pandemics such as COVID-19. The authors draw our attention to the fact that changing policy priorities threaten programs supporting the development of solar energy. The main challenge for politicians is to keep long-term goals in view, even during a short-term crisis. That will help people not only survive the pandemic but also improve their standard of living.


References

Zaman, R., Van Vliet, O., & Posch, A. (2021). Energy access and pandemic-resilient livelihoods: The role of solar energy safety nets. Energy Research & Social Science, 71, 101805. https://doi.org/10.1016/j.erss.2020.101805

TO: Prof. Ellis

FROM: Remonda Mikhael

DATE: 3/3/2021

Summary of Wang et al.'s "Response Time of a Ternary Optical Computer That Is Based on Queuing Systems"

The following is a 500-word summary of a peer-reviewed article about ternary optical computers. The authors go into detail about the construction and theory of ternary optical computers, explaining the theory, how it was tested, and what the results were.

Optical computers have gained attention because of the speed and power at which they operate, and the first ternary optical computer was proposed by Jin et al. Several theories and additions contributed to the construction of the first ternary optical computer. What had to be determined was the system's quality of service, which could only be tested once construction was complete. This article explains the four-stage design of the ternary optical computer. The remainder of the article explains how the systems work, which algorithms are used, and the future directions the work could take.

The ternary computer is broken up into three sections, each handling specific functions that allow the computer to operate efficiently. What makes this computer different from others is that it can process multiple inputs at once, reconfigure itself to understand the user's request, and has plenty of space in the processor to run complex algorithms. Queueing theory is used to measure how fast the computer can operate; there are several ways this is tested in different settings, and all of these tests are applied to the ternary optical computer.
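Queueing theory, at its simplest, relates arrival and service rates to response time. As a minimal illustration (the classic single-server M/M/1 formula, not the paper's multi-stage model), the mean response time is W = 1/(μ − λ):

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time W = 1 / (mu - lambda) for a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# A faster server (higher service rate) cuts the mean response time.
print(mm1_response_time(2.0, 5.0))   # 1/(5-2) ≈ 0.333
print(mm1_response_time(2.0, 10.0))  # 1/(10-2) = 0.125
```

The paper's actual analysis layers several such service stages, but the same intuition applies: higher effective service rates mean shorter waits.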

Two novel strategies are presented, immediate scheduling (IS) and computing accomplished scheduling (CAS), along with how they are used. According to the authors, "Under the IS strategy, the data bits of the optical processor are equally divided into n parts and each part corresponds to a small optical processor that can be independently used" (Wang et al., 2019, p. 6243). The strategy works by sending data to unoccupied processors so the operations can be done quickly.
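A minimal sketch of the quoted idea (my own toy model in Python, not anything from the paper; the even-division and dispatch rules are assumed for illustration):

```python
def split_data_bits(data_bits, n):
    """Equally divide the optical processor's data bits into n parts (IS strategy)."""
    size = len(data_bits) // n  # assumes len(data_bits) is divisible by n
    return [data_bits[i * size:(i + 1) * size] for i in range(n)]

def dispatch(parts, free_processors):
    """Assign each part to an unoccupied small optical processor."""
    return dict(zip(free_processors, parts))

# 12 data bits split among 3 small processors, each usable independently.
parts = split_data_bits(list(range(12)), 3)
assignment = dispatch(parts, ["p0", "p1", "p2"])
print([len(p) for p in parts])  # [4, 4, 4]
```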

The other strategy is computing accomplished scheduling, which happens after a request has been computed. This is done to make sure all requests are operated on simultaneously. It works by sending small bits of data to the processor first, then larger bits. Requests are received by the receiving server and transferred to the preprocessing service to be processed into a tri-value logic operator. The requests are handled on a first-come-first-served basis and then sent to the processor, which translates each request into machine language and begins working on it.
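That ordering can be sketched as follows (my own illustration, not the paper's algorithm): smaller data batches go to the processor first, and ties keep first-come-first-served order because Python's sort is stable:

```python
def schedule_requests(requests):
    """Order requests smaller-first; the stable sort preserves
    first-come-first-served order among equally sized requests."""
    return sorted(requests, key=len)

queue = ["101010", "11", "0110", "01"]  # arrival order
print(schedule_requests(queue))  # ['11', '01', '0110', '101010']
```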

The scheduling strategies and batch size have a direct effect on the various service processes. Using both strategies can increase the time before results are output, because each request must undergo four stages before it is complete; however, the response time does not increase dramatically because the system operates on requests in parallel, so the delay is only minor. Since all requests are not only operated on in parallel but are also divided among four smaller processors, the operating speed of ternary optical computers remains fast.

Reference

Wang, X., Wang, X., Zhang, S., Gao, S., Zhang, M., Zhang, J., & Xu, Z. (2019). Response time of a ternary optical computer that is based on queuing systems. The Journal of Supercomputing, 76, 6238–6257.

Weekly Writing Assignment, Week 5

This Weekly Writing Assignment is meant to help you vet or evaluate where some of your research comes from and report back what you find. Watch this week’s lecture before performing this assignment so that you learn more about the process that I suggest for discovering the information requested below.

For this assignment, refer to two journal articles from different journals that you came across in your research (or, search for your Expanded Definition term again in IEEExplore and/or Academic Search Complete to find two examples for this assignment).

Using the built-in tools in the databases where you found the article and search sites like Google, DuckDuckGo, or Bing, learn more about the specialization of the journal and the kinds of research that it publishes, and find out the name of the editor-in-chief and their professional background (degrees, affiliation, and research specializations).

Then, type a short paragraph in your word processor of choice that identifies the names of the two journals you investigated for this assignment and describes in your own words what each journal specializes in. Also, identify each journal's editor-in-chief and describe their professional details, such as degrees and where they were earned, their affiliation (where they work/teach), and their research specializations (if possible to find).

Finally, copy-and-paste your paragraph into a comment added to this post.

This assignment should not take very long. Focus most of your time this week on completing a draft of your Expanded Definition essay.

Job Search Advice, Week 5

As discussed in this week’s lecture, I built an OpenLab Site called Job Search Advice. It offers help with preparing your resume, cover letter, and other materials for your job search. It includes a video lecture, sample documents, and useful links. It’s meant to be a useful resource for you all. If you know other City Tech students not in our class who might want to check it out, please feel free to share!

Summary of Han et al.’s “Geosocial Media as a Proxy for Security: A Review”

To: Prof. Ellis

From: MD Jahirul Hasan

Date: 03/03/2021

Subject: 500-Word Summary of Article About Security in Social Networking

The following is a 500-word summary of a peer-reviewed article, "Geosocial Media as a Proxy for Security: A Review" by Zhigang Han, Songnian Li, Caihui Cui, Daojun Han, and Hongquan Song, published in 2019, which identifies prominent themes in need of more research amid the continuous growth of social security concerns and cybercrime management. While the majority of people chase short-term solutions, the authors take a different approach, redefining the concept of security in social networking, where users' privacy and security concerns play a vital role in developing sustainable social networking, and treating geosocial media as a proxy for this security. Network security is a set of rules and configurations designed to preserve the integrity, confidentiality, and usability of all software and hardware technologies for computer networks and data. To protect itself from today's ever-growing landscape of cyber threats, any company, regardless of scale, sector, or infrastructure, needs a degree of network security solutions in place. In other words, the authors make clear that network security is the defense against hacking, misuse, and unauthorized alteration of access to files and directories on a computer network. In particular, geosocial media, when paired with location information, can be used as a proxy for security event detection and security situational awareness. The paper includes a synopsis of geosocial media data and the associated processing and analysis methods used for detecting security events, and it summarizes the general framework of security-related analyses based on geosocial media. According to the authors, "Social media data provide rich information that reflects people's social behavior. In the security field, various groups of terrorists and gangs have increasingly recognized the value of social media and have actively used it to plan and organize activities, recruit members, spread terrorist ideas and publish various terrorist messages to expand their influences" (Han et al., 2019, p. 154225).

Considering the economic and moral elements of the equation, the authors divide security-related analysis tasks into two types: security event detection, and security situational awareness and assessment. They distinguish six types of security events: natural disasters, man-made disasters, violent incidents, military events, sociopolitical events, and other security events. Turning to the analysis of different networking systems, the authors go the extra mile to illustrate the general process of security-related analysis based on geosocial media; they identify two types of datasets, social media datasets and auxiliary analysis datasets, and discuss the corresponding data acquisition and preprocessing methods. Geosocial networks and apps, such as Facebook location features, are designed to allow users to share geolocated data. Among all personally identifiable information (PII), a person's position is one of the greatest threats to their privacy. One of the most exciting prospects for geosocial media is its ubiquity around the world, including its widespread adoption by the urban poor in many developing nations. For instance, a person's spatio-temporal data may be used to infer the location of their home and workplace, to track their movements and activities, to learn details about their interests, or even to detect a change from their normal behavior.
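To make the home-inference risk concrete, here is a toy sketch (my own illustration, not the authors' method): treat the most frequent nighttime location in a user's check-in history as a guess at "home":

```python
from collections import Counter

def infer_home(checkins):
    """Guess a home location as the most frequent cell observed
    between 22:00 and 06:00 in a list of (hour, cell) check-ins."""
    night = [cell for hour, cell in checkins if hour >= 22 or hour < 6]
    if not night:
        return None
    return Counter(night).most_common(1)[0][0]

checkins = [(23, "cell_A"), (2, "cell_A"), (14, "cell_B"), (9, "cell_B"), (5, "cell_A")]
print(infer_home(checkins))  # cell_A
```

Even this crude heuristic shows why geotagged posts are both analytically valuable and privacy-sensitive.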
The article summarizes the progress of key technologies related to detecting security events and assessing security situations, including natural language processing, social network analysis, location inference and geospatial analysis, and image or video understanding and visual analysis. The paper concludes with possible future directions and areas of research that could be investigated.

Reference

Han, Z., Li, S., Cui, C., Han, D., & Song H. (2019). Geosocial Media as a Proxy for Security: A Review. IEEE Access, 7, 154224-154238. https://doi-org.citytech.ezproxy.cuny.edu/10.1109/ACCESS.2019.2949115

Summary of Ralph et al.’s “How COVID-19 affects software developers and how their organizations can help”

TO: Prof. Ellis
FROM: Benson Huang
DATE: 3/3/2021
SUBJECT: 500-Word Summary of Article on Covid-19’s effect on Software Developers

The following is a 500-word summary of a peer-reviewed article in which the authors investigate, through surveys, the effects of the pandemic on developers' wellbeing and productivity. COVID-19 was declared a pandemic by the WHO on March 11th, which resulted in lockdowns, and as such many workers were either laid off or told to work from home. Forced to work from home without preparation, people faced many problems, and new issues arose. According to the authors, "People are less likely to comply when they are facing a loss of income, personal logistical problems, isolation, and psychological stress (as cited in DiGiovanni et al. 2004)". Smaller businesses will try to stay open, and people whose basic needs are at risk are less likely to comply with lockdown efforts. After the pandemic, we will find more businesses allowing remote work. However, this is not practical for every business, since some employees must work in-store and some do not have a dedicated workspace at home. There are reports of remote work being more productive, but some of these are self-reports, which may be biased. Measuring productivity for software developers is difficult, as different lines of code can have varying effects on a program; yet some companies still use lines of code as a measure for their developers. Software developers' wellbeing has been found to be closely related to job satisfaction, so keeping them happy is important. A questionnaire was sent out to collect data to test several hypotheses. The target of the study was software developers who used to work in an office but now work at home, though the survey was open to all software developers. The survey was fully anonymous, with a filter question for people who did not meet the requirements. Although there was no cash compensation for completing the survey, the authors offered to donate to an open-source project of the respondent's choice.
To get as many responses as possible, the survey was advertised on many websites and translated into multiple languages, and for some countries a different survey site had to be used because Google was blocked. According to the authors, "We received 2668 total responses of which 439 did not meet our inclusion criteria and 4 were effectively blank leaving 2225" (Ralph et al., 2020, p. 4940). The results supported two of the hypotheses. Some interesting patterns emerged, one of which was that isolated respondents tended to be more afraid; some patterns were consistent with studies of SARS from 2004. Overall, the results showed that software developers working from home experience lower productivity and wellbeing. As such, normal productivity rates should not be expected during a pandemic, and organizations should accept that employees cannot output as much work. Some things to note about the survey: Google Forms is blocked in some countries, so an alternative is needed; working with international teams on a multilanguage survey can generate large samples; and COVID-19 is creating strains on businesses, organizations, and people.
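The authors' filtering arithmetic can be sketched as a simple pass over responses (the field names here are my own invention for illustration, not the study's schema):

```python
def keep_response(response):
    """Keep a response only if it meets the inclusion criteria
    and is not effectively blank (at least one non-empty answer)."""
    return response["meets_criteria"] and any(response["answers"])

responses = [
    {"meets_criteria": True,  "answers": ["yes", "often"]},  # kept
    {"meets_criteria": False, "answers": ["n/a"]},           # fails criteria
    {"meets_criteria": True,  "answers": ["", ""]},          # effectively blank
]
kept = [r for r in responses if keep_response(r)]
print(len(kept))  # 1

# The paper's numbers follow the same logic: 2668 - 439 - 4 = 2225.
print(2668 - 439 - 4)  # 2225
```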


Reference


Ralph, P., Baltes, S., Adisaputri, G., Torkar, R., Kovalenko, V., Kalinowski, M., Novielli, N., Yoo, S., Devroey, X., Tan, X., Zhou, M., Turhan, B., Hoda, R., Hata, H., Milani Fard, A., & Alkadhi, R. (2020). Pandemic programming. Empirical Software Engineering, 25(6), 4927–4961. https://doi.org/10.1007/s10664-020-09875-y

Summary of Lawson’s “Rational function distribution in computer system architectures: Key to stable and secure platforms”

TO:  Prof. Ellis

FROM: Ralph Ayala

DATE: 2/17/2021

SUBJECT: 500-Word Summary of Article About Computer Systems

The following is a 500-word summary of an article about problems regarding the implementation of applications in computer-based systems. The author discusses the effects of a model that involves technology at various levels, where decisions must be made to keep a platform stable and secure. Computer systems suffer from a lack of rational function distribution across their many levels of hardware and software. Rational function distribution makes it possible to minimize complexity, a goal that matters for important software elements. The issue is that the industry's combined hardware and software products have not been given the proper elements to perform the task of creating stable connections. A model of function distribution is used to show the effects and costs at the various levels of hardware and software. Each level contains different materials and uses tools for more complicated projects. Each level inherits complexity from the levels below it through the process of mapping, and as you go up the levels, the number of people active at each level increases. The cost of complexity grows at the higher levels and shrinks at the lower ones, and because complexities are passed upward, the result has been unreliable and insecure platforms. The author lists four principles. The first is to give the problem to someone else who can solve it for you. The second is to give the user all possibilities of what to do. The third is to use a tool that can be adapted to perform a function. The fourth is to accept whatever design mistake has been made and determine whether it can still fit the needs of what has to be done. Determine whether the software is useful or not; if the software becomes a mess, create software that acts as a bridge between an operating system and the applications on a network. Patches have thus become useful for fixing bugs instead of deploying a large workforce to fix them.
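The upward flow of complexity can be illustrated with a toy model (my own sketch, not Lawson's formalism): each level's complexity cost is inherited by every level above it, so totals grow as you climb the stack:

```python
def cumulative_complexity(level_costs):
    """Running totals of complexity, lowest level first:
    each level inherits everything introduced below it."""
    totals, carried = [], 0
    for cost in level_costs:
        carried += cost
        totals.append(carried)
    return totals

# Four levels, hardware at the bottom, applications at the top.
print(cumulative_complexity([1, 2, 3, 4]))  # [1, 3, 6, 10]
```

The point of rational function distribution, on this reading, is to push complexity down the stack so less of it is carried upward.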
With the effort toward stable and secure platforms, complexities can be fixed without too much effort. If one thing is important, it is the interface between software systems, so two approaches were created. The case of the IBM System/360 turned out to contain many problems regarding the complexity of its decision making; due to the overwhelming problems that occurred, customers did not have a chance to master it in their own environment. The case of Burroughs involved multiple highly advanced products released without realizing the cost and reliability required. Had there been a more strategic plan for releasing the products, technology could be different today. The large advances of technology in the mid-1970s meant that hardware-software products that could have served good functions did not survive, and the focus was placed instead on processor performance. Compatibility costs must be made to match safety standards, so this is a time for new computer system architectures to arrive. Education in system-based knowledge must be provided to computer system architects who have worked on many computer systems. The role and responsibilities of a computer architect are as follows: the architect must find mappings for each level and distribute functions to meet goals. To use a structure one must be creative, and structure must be central to any designer's work. People might easily settle on a quick solution, but no single solution leads to improvements; in a field like this it is important to think about scenarios that could happen. In the new-dominant-actor scenario, the complexity of stableware platforms is reduced, and there is potential for some countries to reach broad solutions regarding stable platforms. "The Russian computing industry has an early history of developing hardware–software approaches, which result in significantly simpler software" (Lawson, 2006, p. 380). In the dominant-customer scenario, people produce a kind of trustworthy platform.
This can create the potential for catastrophe in certain areas of the business. The rebirth scenario is the best for its increase in products to fight off competitors. Of course, effort must be put into the instruction sets of such hardware, and the competition around such computers can help advance software. Transforming the computer industry into stableware is an ambitious long-term goal; however, computer systems are much needed today. The vice president of research, Paul Horn, proposed a new field for the computer industry. This field would require machines that perform at their best so users do not have to concern themselves with small details. Creating that kind of system can be quite challenging, and its complexity is hard to master. Rational function distribution together with autonomic computing can help contain complexity today. Large amounts of code are needed to achieve certain software functions. Computer system architects must be given the proper knowledge to ensure secure and stable platforms. Stableware could happen in the future, but the risks in accomplishing it could prove fatal.

Reference

Lawson, H. W. (2006). Rational function distribution in computer system architectures: Key to stable and secure platforms. IEEE Transactions on Systems, Man, and Cybernetics—part C: Applications and Reviews, 36(3), 377-381. https://doi.org/10.1109/TSMCC.2006.871571

Summary of Andrzej J. Zaliwski's "Computer Network Simulation and Network Security Auditing in a Spatial Context of an Organization"

TO: Professor Jason W. Ellis.

FROM: Mamadou Diallo

Date: 03/03/2021

SUBJECT: 500-Word SUMMARY of Article about Computer Network.

Competition in the current world of business organizations has been a factor pushing many of them toward advancement. Businesses operate under micro and macro environments, and the two environment types act as forces shaping the scope of changes within the organization. Zaliwski (2005) addresses such organizations concerning Computer Network Simulation (CNS) and Network Security Auditing (NSA) that would follow a spatial pattern. In the article, Zaliwski (2005) notes that micro-level issues require immediate attention from management. Disruption of business operations by macro-threats, such as the government and competitors, is kept in check by law and custom. The micro level of threats, however, involves actors who are not bound by the laws and rules that govern procedures; computer network threats are an example. Therefore, the suggestion is to employ professionals with the skills to manage computer networks, which, given the nature of current systems, requires wiping out the complexity of security-related software and easing security auditing methodologies. Zaliwski (2005) reports that the complexity of the systems and the difficulty of the methodologies make them hard for staff to apply. The proposal does not neglect that the security model must align with the organization's policies and procedures and work hand in hand with the organizational structure (Zaliwski, 2005). Although security systems are critical to an organization's operations, they need to be simple, usable, and easy to interact with. They also need to be cheap, involving an effective and inexpensive laboratory. The laboratory, in this case, is used for research and teaching sessions to advance computer network security systems.

According to Zaliwski (2005), the way to reach this goal is to create a virtual computer network inside a physical lab. That would mean shortening the chains of physical computers that would otherwise add expense to the system. The system would work with open-source, commercial, and rare solutions. It would also require graphical network visualization, which helps professionals understand the data connections (Zaliwski, 2005). In addition, software for network design, administration, and management would be necessary for the system to be effective. No other design would work better than one involving three sub-systems: the spatial models, the repositories, and the virtual networks (Zaliwski, 2005). The entire system requires three computers: one serves as the host for all the virtual machines, which are created and maintained with User Mode Linux; the second connects to the virtual world; and the third designs the network and keeps data for auditing purposes. The system Zaliwski (2005) describes is lightweight and simple for professional use. It is also cheap and affordable for micro-business firms and teaching departments. The auditing methodologies would be simple, unlike existing systems that keep professionals scratching their heads. Therefore, the solution is to move the network lab from physical to virtual.
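The three-machine layout described above can be sketched as a small inventory (the machine labels and role descriptions are my own shorthand for illustration, not the paper's exact terms):

```python
# Hypothetical labels for the three lab machines Zaliwski outlines.
virtual_lab = {
    "host":    "runs all the User Mode Linux virtual machines",
    "gateway": "connects to the virtual world",
    "auditor": "designs the network and keeps data for auditing",
}

for machine, role in virtual_lab.items():
    print(f"{machine}: {role}")
```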

Reference

Zaliwski, A. J. (2005). Computer network simulation and network security auditing in a spatial context of an organization. Informing Science: International Journal of an Emerging Transdiscipline, 2(7), 159–168.

Summary of Hare’s “Noisy Operations on the Silent Battlefield”

TO: Prof. Ellis
FROM: Zeela Rafija
DATE: 03/03/2021
SUBJECT: 500-Word Summary of Article about the Cyber weapons

The following is a 500-word summary of "Noisy Operations on the Silent Battlefield," a peer-reviewed article by Forrest Hare and William Diehl about cyber weapons, which can be divided into intrusive and unintrusive capabilities. The authors discuss cyber weapons and battles and suggest preparing for them. According to the authors, "By 2014, the Russians had honed unintrusive (but noisy and disruptive) cyber operations down to a finely tuned science" (Hare & Diehl, 2019, p. 7). The article advocates preparing for the use of unintrusive precision cyber weapons (UPCW) through improved integration, acquisition, and training. It is grounded in Dipert's classification and explains the two classes of offensive cyber capabilities, illustrating specimens of such attacks and their types. The authors review several preceding conflicts in which unintrusive cyber weapons were influential in operational terms. They argue that, in Dipert's nomenclature, cyber arms can be classified into unintrusive and intrusive capabilities, with intrusive cyber incidents being more focused than unintrusive attacks. They also note that IoT susceptibility to UPCW attacks was demonstrated in October 2016. Several examples of UPCW usage are given in the article, including its use in local battles. Cyber-criminal organizations have conducted several cyber-attacks in modern conflicts that required already-positioned exploits, and the authors illustrate several examples in this regard.
For example, they detail Russia's cyberwar against Ukraine and how it attacked the country's transport, financial, and other systems. An intrusive weapon requires the attacker to gain access; an unintrusive weapon does not. Instead, a sensor or server is degraded so that it cannot function properly for a certain amount of time. The authors offer several potential advantages associated with developing and employing UPCW. They state that a cyber belligerent can enjoy many benefits by assimilating UPCW into orthodox military processes: for instance, the capability of unintrusive precision cyber weapons is less transitory, and it requires less strictly expert operators than intrusive weapons do. Moreover, a cyber operator who employs UPCW can measure the weapons' effectiveness more directly. The authors also present the challenges of using UPCW in a particular conflict, which cannot be ignored. Apart from the benefits of emerging, less sophisticated UPCW, the implications for cyber defenders are apparent: defenders must get ready for improved opponent use of UPCW in future conflicts and during periods of increased tension. A limitation of the article is that it does not take a stable position on the amalgamation of electronic warfare (EW) and cyber operations. The authors further imply that EW capabilities have been and will be utilized by opponents to achieve integrated cyber effects. The findings demonstrate that, with an emphasis on defense, friendly actors should consider developing and investigating more improved options associated with UPCW, which will help enhance its features. The article also argues that UPCW should be studied more deeply alongside cyber-EW incidents.

Reference 

Hare, F., & Diehl, W. (2019). Noisy operations on the silent battlefield. The Cyber Defense Review, 5(1), 153–168. https://www.jstor.org/stable/26902668