Linear Regression Project: A Study of App Downloads by Age, by Alamgir and Emily

Linear Regression Project_Alamgir_N_Emily

Stats Final Project_Alamgir_N_Emily

final project__Alamgir_N_Emily


Final paper

In his article “The Philosophy of Data”, David Brooks states that we “now” have the ability to gather huge amounts of data. You have to know the past to understand the present. The computer revolution of the last four decades has completely altered how data is collected and analyzed. Prior to it, data was jotted down and tabulated by hand in paper spreadsheets. The tables were used to calculate, analyze and summarize data, but the calculations were finished by hand or with a calculator. The limitations of stacks of paper, human error, and the monetary cost of man-hours meant that manual data processing could not handle the increasing demands of modern data collection: the cost of collecting data grew in proportion to the amount collected, which made gathering large amounts prohibitively expensive. The development of computers and of programs designed for manipulating data eliminated the need for manual processing. Computer programs are far more efficient, handling in moments a volume of data that would take human processors enormous amounts of time to work through.

The amount of data in our world has been growing exponentially in recent years, and the need to analyze it at scale has led to the new reality of data mining. Data mining, by definition, is data processing that uses sophisticated search capabilities and statistical algorithms to discover patterns and correlations in large data sets. According to IBM, every day we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today was created in the last two years alone. The exponential increase in the volume and detail of data captured by enterprises, multimedia, social media, and the Internet has created abundant interest in data.
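
To make the definition concrete, here is a minimal sketch in Python of the simplest statistical pass data mining performs: scanning pairs of variables for correlation. The dataset and column names are invented for illustration, not drawn from any company's records.

```python
# A toy "pattern discovery" pass: compute the correlation between
# every pair of numeric columns in a (hypothetical) sales table.
import pandas as pd

# Invented point-of-sale summary, one row per customer
sales = pd.DataFrame({
    "age":            [23, 35, 45, 52, 31, 64, 28, 41],
    "monthly_visits": [ 8,  5,  3,  2,  6,  1,  7,  4],
    "basket_size":    [12, 20, 25, 30, 18, 35, 15, 22],
})

# The correlation matrix surfaces relationships (here, older customers
# visit less often but buy bigger baskets) that would be tedious to
# spot by hand at real-world scale.
print(sales.corr().round(2))
```

Real systems apply far more sophisticated algorithms over millions of rows, but the principle is the same: let the machine search the combinations.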

Data mining is predominantly used today by companies with a strong consumer focus, such as retail, financial, communications, and marketing organizations. It enables these companies to uncover relationships among factors such as price, product positioning, economic indicators, competition, and customer demographics. Furthermore, it allows them to determine the impact of those factors on sales, customer satisfaction, and corporate profits.

Moreover, corporations use point-of-sale records of customer purchases to distribute targeted promotions based on an individual’s purchase history. By mining demographic data from comment or warranty cards, a retailer can develop products and promotions that resonate with specific customer segments. For example, Netflix mines its video rental history database to recommend titles to individual customers, and Visa can suggest products to its cardholders based on an analysis of their monthly expenditures.
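
As a sketch of how purchase-history recommendation can work, the following toy recommender counts how often items appear in the same basket. The catalog and baskets are invented, and real systems (Netflix's or Visa's) use far richer models than simple co-occurrence.

```python
# Item co-occurrence recommender over hypothetical baskets.
from collections import Counter
from itertools import combinations

purchases = [
    {"diapers", "beer", "chips"},
    {"diapers", "wipes"},
    {"beer", "chips", "salsa"},
    {"diapers", "beer"},
]

# Count how often each pair of items shares a basket.
pair_counts = Counter()
for basket in purchases:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def recommend(item, k=2):
    """Items most often bought together with `item`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if item in (a, b):
            scores[b if a == item else a] += n
    return [it for it, _ in scores.most_common(k)]

print(recommend("diapers"))  # ['beer', 'chips']
```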

Target is pioneering massive data mining to transform its supplier relationships. Target captures point-of-sale transactions from over 2,900 stores in 6 countries and continuously transmits this data to its massive 7.5-terabyte Teradata data warehouse. Target allows more than 3,500 suppliers to access data on their products and perform data analyses. These suppliers use the data to identify customer buying patterns at the store-display level, to manage local store inventory, and to identify new merchandising opportunities. In 1995, Target computers processed over 1 million complex data queries.

In conclusion, computers have revolutionized how data is collected and analyzed over the past four decades. The invention of the computer radically transformed how data is recorded and tabulated, and exponential advances in computers and processors have made them ever more efficient and capable of processing enormous volumes of data. The data captured by enterprises, multimedia, social media, and the Internet created abundant interest in data mining, and corporations are now using the data accrued over past decades to reach into every aspect of our lives as consumers.

http://www.jstor.org/discover/10.2307/1403524?uid=3739832&uid=2&uid=4&uid=3739256&sid=21102312397507

http://www.jstor.org/discover/10.2307/25471201?uid=3739832&uid=2&uid=4&uid=3739256&sid=21102312397507

http://en.wikipedia.org/wiki/Data_mining


Philosophy of Data

Can data do a human being’s work? Or can a human work using only his or her mind, without data? These questions about data are the most important ones for me. In “The World Is Too Much with Us”, William Wordsworth describes how the revolution of machines is making humans more materialistic. Does data also change human behavior? I wonder about these questions and want to learn more.


Andrey Evdokimov. Research Assignment Final Draft

In his article “What Data Can’t Do”, David Brooks discusses the positive and negative aspects, or “strengths and limitations”, of data analysis and how one’s decisions might depend on it. At the beginning, the author mentions the story of the chief executive of a large bank who had to decide whether to relocate his business from crisis-driven Italy or to stay despite unfavorable economic conditions. He decided to stay, notwithstanding that the data suggested otherwise. Staying meant gaining something more important than profits: people’s recognition, both potential clients’ and current employees’, of his bank as a reliable one.

Data analysis is a crucial part of making a decision, but quite often it is not enough to arrive at the answer to a given problem. Sometimes relationships between people, whether business partners or family members, are too complicated for data alone to capture. Love, hatred, trust, fear and so on are frequently far more important factors in making a decision. Analyzing data is a very powerful tool; it helps “compensate for our overconfidence in our own intuitions” and can reduce the risk of taking the wrong direction, but there are moments when data’s suggestions should be treated as a secondary option.

For example, data cannot take into consideration the social norms of a society, while the human brain, on the contrary, cannot cope with big arrays of simple calculations. Well, people invented computers, so there is no need for the brain to calculate outcomes or collect data monotonously; on the other hand, data-collecting computers, however potent they might be, cannot execute certain decisions without human interference. I agree with the author that in some situations the human brain, with its analytical abilities and common sense, is the only thing people need in order to act right. Human decisions, in contrast to computer-based analysis of data, are capable of nonlinearity: even for the biggest, most complicated problems, the brain can find a simple and fast solution. This reminds me of the novel “Do Androids Dream of Electric Sheep?” by Philip K. Dick, where an empathy test is the only way to distinguish androids from humans: although androids can deal very well with data analysis, they have no empathic responses and thus are considered machines.

In “Big Data on Campus”, M. Parry describes the situation at several U.S. higher-education institutions, such as Arizona State University, Rio Salado Community College and Austin Peay State University, where degree-monitoring systems suggest to students which courses to take or which major to follow based on their activity on the Web. Whatever students do online leaves traces, like “digital breadcrumbs”. Computers analyze this information, making e-advising software able to predict a student’s grades, increase the probability of reaching graduation with less effort, and so on. After all of a student’s activities are analyzed, the verdict is made: Hey you, do you like chemistry and biology? Forget it; the algorithm that predicted your possible future says you should consider wood carving instead! No wonder that for students who run off-track the outcome can be regrettable: it could mean dropping the major or taking extra semesters to finish the degree. Despite the convenience, these kinds of interventions in students’ lives have yielded controversial results.
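
A hypothetical sketch of the e-advising idea follows: fit a simple classifier on past students' activity traces and score a new student's chance of passing. Every feature, every number, and the choice of logistic regression here are assumptions for illustration; the systems Parry describes are proprietary and far more elaborate.

```python
# Toy "digital breadcrumbs" model: predict pass/fail from invented
# activity features using logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: logins per week, assignments submitted, forum posts
X_past = np.array([
    [1, 2, 0], [5, 8, 3], [2, 3, 1], [6, 9, 4],
    [0, 1, 0], [4, 7, 2], [3, 5, 1], [7, 9, 5],
])
passed = np.array([0, 1, 0, 1, 0, 1, 1, 1])  # 1 = passed the course

model = LogisticRegression().fit(X_past, passed)

# Score a new student's traces; the "verdict" is just a probability.
new_student = np.array([[2, 4, 0]])
print("P(pass) ~", round(float(model.predict_proba(new_student)[0, 1]), 2))
```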

Another problem is that data tends to accumulate, or “create bigger haystacks”, without resolving anything. It is getting more and more complicated for people to deal with massive amounts of information. It is true that this way we can find more statistically significant correlations, but that does not simplify the original problem at all; it makes things more difficult, sometimes even stumping us.
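
The “bigger haystacks” worry can be made precise: test enough variable pairs and some will look significantly correlated by pure chance. A small simulation of the multiple-comparisons effect, on pure noise with invented sizes and assuming SciPy is available:

```python
# With 40 columns of random noise there are 780 pairs to test;
# roughly 5% of them will pass p < 0.05 despite no real relationship.
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 40))  # 100 observations, 40 noise columns

false_hits = sum(
    1
    for i, j in combinations(range(40), 2)
    if pearsonr(data[:, i], data[:, j])[1] < 0.05
)
print(false_hits, "of 780 pairs look 'significant' at p < 0.05")
```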

There are more points in the article that show the weak sides of data analysis. Many of them merge into the idea that data-collecting techniques do not take into consideration what kind of data they actually collect; quantitative characteristics end up mattering more than the quality of the data. Nevertheless, “big data” is a great and useful tool, and people who consider themselves professionals have to figure out the proper way of using it in each particular situation.


Works cited:

B. R. Sachs. “Consumerism and Information Privacy”. Virginia Law Review. March 1, 2011. SSRN: http://ssrn.com/abstract=1373123

M. Parry. “Big Data On Campus”. New York Times. July 22, 2012. http://www.nytimes.com/2012/07/22/education/edlife/colleges-awakening-to-the-opportunities-of-data-mining.html


Final draft Youcef Saidi

We make lots of decisions in life, from the minute we wake up to the moment we close our eyes; so how and what makes us make these decisions? According to David Brooks, the decisions we make are based either on our own intuition or on data gathered about that particular problem or situation.

I believe it is true that there are certain things data cannot do, and situations where data cannot be used, but the way David Brooks structured his articles was as if data should never be used on a daily basis. In his article “The Philosophy of Data” he raised an interesting question that we should all be asking: “What kinds of events are predictable using statistical analysis and what sorts of events are not?”

It is obvious that we rely a lot on technology and data nowadays, but we have to admit that data has helped human evolution tremendously. Technology is now better, faster and more accessible; so why not use it? Which brings us back to the point Brooks raised: when should data be used? Our intuition plays a role in decision making, but how do we know which one is best to rely on, data or intuition?

Brooks cited a few situations where data was used to prove that intuition was wrong, such as the effectiveness of money spent on a political campaign, or predicting behavior based on a person’s verbal habits. The only problem was that Brooks did not use enough evidence to prove his point in the article.

In certain situations, data can be used to change points of view, especially in politics. Political campaigns are important in every country; they determine the path the country will take for the next few years, depending on who is elected. Therefore, politicians need to be careful about what they say, and be able to support their beliefs, opinions and ideas for change with facts and effective data. Unfortunately, according to Sandra D. Andrews in her book “The Power of Data”, the given data and statistics are not always true. Some are accurate, but others are not, and you cannot tell the difference, because they can be manipulated with the technology we have nowadays.

An example of political data manipulation is the argument over gun control happening nowadays. In the article “Never a Magic Bullet”, Ryan Somma writes, “The Harvard Injury Control Research Center has a large number of studies correlating gun ownership with increased homicide, suicide, accidental firearm deaths, violent deaths to children, road rage, and other social ills; unfortunately, these studies were almost all conducted by the same small group of researchers.” I am not saying that these statistics are right or wrong, but people will not seek the truth: as long as they find a reason for or against their beliefs, they will publish it regardless of its accuracy.

This supports my opinion that data is not always reliable and can at times be manipulated. In another article, “What Data Can’t Do”, Brooks shows the other side of using data, and some of the consequences of choosing emotionalism over ideology in decision making, and the opposite as well. First, he starts with an example of a bank in Italy whose C.E.O. decided to rely on his sensitivity and intuition and keep the bank open during a European recession, which was not a smart move according to the available data. Then Brooks goes on to compare the human brain with the “machine”: the difference between the use of logic, emotions and machine-produced data in social life and especially in decision making.

For the most part I agree with Brooks, but I think he went a little too far with his ideas about using technology in social settings. I believe that people are wise enough to know how and when technology should be used. Data should be a tool that helps us make wiser decisions, with human emotions taken into consideration. If humans invented the machine that collects data, then they should also be able to know when to use it for efficiency and when to leave it aside.


Final draft for Melecia Lee

Methods of Data Analysis

Broadly speaking, combining the traditional and computational methods of data analysis yields a well-rounded result. In the traditional method, the emotional or human factors are captured, whereas the algorithms of the computational method are limited in scope and give a measurement report that omits the emotional or social variables. An analysis based solely on one of these methods leads to results skewed in the direction of that method.

I became interested in researching the approaches used in data analysis because of Brooks’ Banking Executive example, which appears in his second article, “What Data Can’t Do”. This example has two interesting aspects. First, it expands on Brooks’ view from his first article, “The Philosophy of Data”, where he indicates that statistical analysis reveals new patterns that humans fail to notice. Second, it shows that consideration of both the computational and the human aspects leads to an informed decision-making process. Hence, Brooks’ Banking Executive example highlights important questions about the interdependence of computational data analysis, human factors and real-world decision making.

Brooks’ two articles explore data analysis from various angles. His first article leans toward the computational method and shows that the ideals humans hold dear often negatively impact the analysis of data, because these ideals simply may not reflect reality. Brooks supports this in his first article with the finding from Gilovich, Tversky and Vallone “that a player who has made six consecutive foul shots has the same chance of making his seventh as if he had missed the previous six”. Hence, people’s intuition that a player can have hot and cold streaks in a game is incorrect. In his second article, in the Bank Executive example, Brooks briefly mentions that the executive was not oblivious to the data from the computational analysis; he then stresses the executive’s decision to “remain in the weak economy and ride out any potential crisis, even with short-term costs”, a decision based on the emotional and trust connection the bank had established with the people of Italy. It is this example that illuminates the power of decisions made when the computational and traditional methods of analysis work together.

One of the pluses of computational analysis is the ability to analyze large datasets quickly. Stewart et al. acknowledge this in their research when they say, “investigators can now rely upon alternative sources and techniques to corroborate information about public health events”. However, can one rely on this result? Researchers have found that more is needed to garner better results. Stewart acknowledges that although the statistical pattern-recognition algorithm was good for their research, it raises further questions about the quality of the variables used to determine the pattern; hence the team had to extend their research to consult with “domain experts” (Stewart 8). Here, a team of researchers tackles a large dataset and is held back because one method of analysis did not give a comprehensive result.

In Lewis et al.’s research, the analysis is questioned when large datasets are computationally analyzed, because software has its limitations. This research, dealing with the analysis of data on the internet, shows that “non-traditional variations are needed to cope with the unique nature of the internet and its content” (Lewis 35), while “algorithmic analysis of content remain limited in their capacity to understand latent meanings or the subtleties of human language” (Lewis 35). Software is not able to interpret every human instinct, sudden emotional outburst, or behavior that humans express in slang. Human social culture changes rapidly, which makes analyzing data written on the web more complex, since a slang word’s meaning today can be completely different tomorrow. This makes it difficult to rely solely on the computational method.
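
A toy example makes the limitation concrete. The word lists below are invented; the point is only that a naive keyword scorer, the simplest form of algorithmic content analysis, misreads slang whose surface words look negative:

```python
# Naive keyword sentiment: counts "positive" minus "negative" words.
POSITIVE = {"great", "love", "amazing"}
NEGATIVE = {"sick", "killer", "terrible"}

def naive_sentiment(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Both are praise in current slang, but the scorer reads them as negative.
print(naive_sentiment("this album is sick"))         # -1
print(naive_sentiment("what a killer performance"))  # -1
```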

Both Stewart et al. and Lewis et al. agree that the results of computational analysis are not fully reliable and can be skewed by its limitations. Stewart agrees her team has to do future work to offer a better analysis, since the indicators used can change. Brooks’ second article also shows that analysis of large data sets is good and can help with decisions, but that relying solely on computational analysis omits key insights gained from traditional methods. Moreover, relying solely on the traditional method is also not advisable, as humans can incorporate their own feelings, culture, beliefs or prejudices and skew the analysis. Together, the computational and traditional methods of analysis allow for a well-rounded result and a more accurately informed decision-making process.

Works Cited

Brooks, David. “The Philosophy of Data”. New York Times 4 Feb. 2013: A23. Web. 1 Mar. 2013. <http://www.nytimes.com/2013/02/05/opinion/brooks-the-philosophy-of-data.html>

Brooks, David. “What Data Can’t Do”. New York Times 18 Feb. 2013: A23. Web. 1 Mar. 2013. < http://www.nytimes.com/2013/02/19/opinion/brooks-what-data-cant-do.html>

Lewis, Zamith and Hermida. “Content Analysis and Big Data”. Journal of Broadcasting & Electronic Media Mar. 2013: 34-52. Web. 15 Mar. 2013.

Stewart, Fisichella, Denecke. “Detecting Public Health Indicators from the Web for Epidemic Intelligence”. 2010. Web. 15 Mar 2013. <http://www.l3s.de/web/upload/documents/1/paper44_Stewart.pdf>


Final Draft

We humans are set apart from other species because our brains are much more efficient at recognizing patterns. Everything we see in our everyday lives we subconsciously analyze to find patterns or some kind of trend, even when we don’t notice that we are doing it. For example, say an individual has had a series of bad experiences involving a pool or heights; that person will at times develop a sort of defense mechanism in the form of a phobia. This is probably because, statistically speaking, the person has recognized that the chance of a poor outcome, given a pool or heights as the first input, is higher than they will accept. I personally agree with the article in most cases; however, I do think that the future is unpredictable at times and can show spikes or distortions in a pattern without notice.

Chaos theory is based on this: it states that at any given time, any experiment, even one involving numbers, can be completely unpredictable. We humans like to relate the unknown to chaos. We like to predict anything and everything, and we do so by creating statistical data that highlights the trend for us. I enjoyed Brooks’ comment on the 2012 election campaign tactics of the Obama administration. Although I couldn’t care less about either Romney or Obama, it is irrefutable that vilification and elections go hand in hand. The reason the straw-man attempt by the administration did not work, in my opinion, is that Romney was doing a good enough job isolating himself from the voters. Ron Paul was basically an unknown entity with respect to politics, not because he was new, but because he had no media exposure, given the focus on the so-called red and blue parties. Since we are speaking about trends here, it is worth mentioning that data suggests most of our presidents have been related to one another in some way. It shocks people when I tell them that Obama and Bush are actually cousins. The bottom line, to me, is this: data can help us, but only to a certain degree.

A company can be having a great year and be bankrupt the next due to unpredictable events. The human obsession with keeping records of everything and decoding them meticulously in an attempt to find a pattern is an old one. It is unarguably solid, but subject to spikes and distortions just as anything else is. Brooks’ second article was particularly enjoyable. He speaks of how data cannot take into account the human factor of emotion, and he is right. Emotions themselves are unpredictable, thus making results produced by emotions ultimately… unpredictable. Brooks writes about a struggling company that, in all good sense, should have pulled out while it had the chance, but for whatever reason chose to stay in and ride out the storm. In that case it worked out favorably for the company, but that is not to say that if the experiment were repeated it would produce the same result. Ultimately, the point being made here is that data can only tell a person so much. Such is the case in politics.

Probability and statistics play a major role in politics. In fact, the counting of votes is basically a statistical procedure for putting a person in office. Whenever one watches a news channel during election season, one is bombarded with numbers, percentages and likelihoods of winning or losing, almost as if it were less an election and more an off-track betting facility. A campaign consists of planning, information gathering and a meticulous state-by-state dissection of the variation in opinion based on the reality and mindset of the people who inhabit each state. New York, for example, is roughly 60 percent Democrat and 33 percent Republican, with the rest going to liberal and conservative parties. With that information in hand, a Republican candidate knows that to tip the scale in his or her favor, he or she will have to appeal to the overwhelmingly Democratic population of the state. Having barely a 30 percent chance at anything is almost never a good thing. This leads candidates to spend more money in these kinds of areas, where they know they must work hard to get a vote.

A candidate makes his or her appeal to the voters because the voters are the ones who cast ballots. Consider the following groups of people: adults, registered voters and likely voters. To discern the mood of the public, any of these groups may be sampled. However, if the intent of the poll is to predict the winner of an election, the sample should be composed of likely voters. The political composition of the sample also plays a role in interpreting poll results. A sample composed entirely of registered Republicans would not be desirable for a question about the electorate at large; and since the electorate rarely breaks into 50% registered Republicans and 50% registered Democrats, even a half-and-half sample may not be the best to use.

Abortion has been a key factor in the last three presidential elections. Candidates are careful not to give a definite answer at times, because they know that in another part of the country the sampled population’s opinion will vary from their own. I should further explain a word that has been used in this essay: sampling. Sampling is taking a small portion of something rather large and predicting how the remainder will react based on the result of the tested or “sampled” subjects. This can work at times; however, it may also backfire. Let’s consider the previous presidential election between Barack Obama and Mitt Romney. Romney lacked support in the Latino community heading into the election, while Obama had the majority of the Latino vote. One of the many mistakes made by the Romney campaign was underestimating the votes being lost to Obama because of this: Romney’s campaign crew sampled incorrectly. Admittedly, they claimed their numbers had predicted something completely different.

A poll with a larger sample size is not necessarily the better poll. On the other hand, a sample size may be too small to say anything meaningful about public opinion: a random sample of 20 likely voters is too small to determine which direction the entire U.S. population is leaning on an issue. Associated with the size of the sample is the margin of error: the larger the sample size, the smaller the margin of error. Surprisingly, sample sizes as small as 1,000 to 2,000 are typically used for polls such as presidential approval, whose margin of error is within a couple of percentage points. The margin of error could be made as small as desired by using a larger sample; however, this would require a higher cost to conduct the poll.
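
The trade-off is easy to check with the standard formula for a proportion's margin of error, MOE = z * sqrt(p(1-p)/n), taking the worst case p = 0.5 and z = 1.96 for 95% confidence (a simplified model that ignores the design effects real pollsters must handle):

```python
# Margin of error versus sample size for a simple random sample.
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (20, 1000, 2000, 10000):
    print(f"n = {n:>6}: +/-{margin_of_error(n):.1%}")

# n =     20: +/-21.9%  (far too small to say anything meaningful)
# n =   1000: +/-3.1%   (the typical approval poll)
# n =   2000: +/-2.2%
# n =  10000: +/-1.0%   (five times the cost buys about one point)
```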

“Statistics had its origins in politics,” Philip M. Hauser wrote in The American Statistician. Brooks may or may not agree with that statement. One fact remains irrefutable: probability and statistics, although a vital part of politics, are only as good as the statisticians behind the number crunching. Even then there is “noise”, and there are distortions in the expected results, because not everything can be tied to a number; such are human emotions and, sometimes, just plain old luck. It is unwise to bet the farm on odds that are not in your favor, but caution is un-American: we Americans love nothing more than playing with chance and finding a way to beat the odds. When it comes to politics, although it may have a darker side, the name of the game is still, essentially, probability.

Sources:

Philip M. Hauser. The American Statistician, Vol. 27, No. 2 (Apr. 1973), pp. 68-71.


http://en.wikipedia.org/wiki/Social_statistics


http://scienceblogs.com/goodmath/2006/06/28/skewing-statistics-for-politic/


David Brooks. “The Philosophy of Data”. New York Times, February 4, 2013.

David Brooks. “What Data Can’t Do”. New York Times, February 18, 2013.

Other sources include various websites (+15)


E_Bremner_Final_Writing_Assignment


Errol Bremner
Mat 1372: Statistics with Probability
Professor Ezra Halleck
Spring 2013

How statistics is used in today’s world.

This response is to two David Brooks columns from the New York Times, entitled “The Philosophy of Data”[1] and “What Data Can’t Do”[2], from February 4th and 18th, respectively.

In his article “The Philosophy of Data”, David Brooks states that everything is measurable or quantifiable, and that data is so transparent and reliable that it allows emotionalism and ideology to be filtered out. He asks what kinds of events we are able to analyze using statistical analysis and what sorts we cannot, and wonders when we should rely on intuition and when we should use data.

Hot streak or cold streak, anyone?

According to Brooks, there are no hot streaks or cold streaks, only cold hard data. He cited research by Thomas Gilovich, Amos Tversky and Robert Vallone, who found “that a player who has made six consecutive foul shots has the same chance of making his seventh as if he had missed the previous six foul shots.” Mr. Brooks further backed up this notion with the idea that more campaign money does not guarantee political victory: nearly every person who runs for political office has an intuitive sense that they can powerfully influence their odds of winning if they can just raise and spend more money. But this, too, is largely wrong.
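
The foul-shot claim is easy to simulate. Assuming (for illustration only) that every shot is an independent 70% make, the make rate immediately after six straight makes stays at the base rate; streaks carry no predictive power:

```python
# Simulate i.i.d. foul shots and condition on six consecutive makes.
import numpy as np

rng = np.random.default_rng(1)
shots = rng.random(200_000) < 0.70  # True = made shot

after_streak = [
    shots[i] for i in range(6, len(shots)) if shots[i - 6:i].all()
]
print(f"P(make | 6 straight makes) ~ {np.mean(after_streak):.3f}")  # ~0.70
```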

What! My intuition is wrong?

David Brooks said in his NYT article “The Philosophy of Data” from February 4th, 2013, that teachers imagine they will improve outcomes if they tailor their presentations to each student’s learning style. But there was no evidence to support this either.

Data can illuminate unnoticed behavioral patterns!

Brooks thought that people who frequently use personal pronouns were likely to be more egotistical than people who don’t. But Professor James Pennebaker of the University of Texas proved him wrong. According to Pennebaker’s book, “The Secret Life of Pronouns”, when people are feeling confident, they are focused on the task at hand, not on themselves: high-status, confident people use fewer “I” words, not more.

Brooks on Pennebaker.

Brooks cited another observation from Pennebaker’s work: our brains often don’t notice subtle verbal patterns, but Pennebaker’s computers can. Brooks further noted that younger writers use more downbeat and past-tense words, while older writers use more positive and future-tense words.
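
As a toy version of the kind of counting Pennebaker's programs do (the word lists here are abbreviated inventions, nothing like his actual dictionaries), one can tally pronoun and tense markers per text:

```python
# Count first-person, past-oriented, and future-oriented words.
import re
from collections import Counter

I_WORDS      = {"i", "me", "my", "mine"}
PAST_WORDS   = {"was", "were", "had", "did"}
FUTURE_WORDS = {"will", "shall", "going"}

def verbal_profile(text):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words)
    return {
        "I-words": sum(counts[w] for w in I_WORDS) / total,
        "past":    sum(counts[w] for w in PAST_WORDS) / total,
        "future":  sum(counts[w] for w in FUTURE_WORDS) / total,
    }

print(verbal_profile("I was sure my plan had failed, and I did regret it."))
print(verbal_profile("We will build what matters and keep going forward."))
```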

The limitations of data analysis.

In his article “What Data Can’t Do” from February 18th, 2013, David Brooks told an interesting story of an Italian banker who, after gathering all the data, still chose to stay in Italy even though the economy was weak and a euro crisis loomed.

Bremner on Brooks.

Mr. Brooks literally kicked over the apple cart for me where data analysis is concerned. And there I was, thinking that those six goals I scored in the high school soccer final were a hot streak. Ha! Ha!

Seriously though, after reading David Brooks’ article “What Data Can’t Do” from February 18th, 2013, one claim in particular stuck with me: data favors memes over masterpieces.

Brooks said that data analysis can detect when large numbers of people take an instant liking to some cultural product, but many important (and profitable) products are hated initially because they are unfamiliar.

I decided that I would like to check why Betamax lost to VHS videotape; I keep hearing that Betamax was the better product. I also want to do some research on the correlation between learning styles and the mode of teaching, or the way the data is presented.

*A personal note: Pennebaker said that younger writers use more downbeat and past-tense words than older writers, who use more positive and future-tense words. I agree with Mr. Pennebaker; as a writer I have always felt comfortable writing in the past tense, and I never knew why until now. I always thought it was because I came from a foreign country. Now I know!

Betamax versus VHS.

In his 2008 article “The Betamax vs. VHS Format War”[3], Dave Owen of MediaCollege.com stated that Sony’s Betamax video standard was introduced in 1975, followed a year later by JVC’s VHS. For around a decade the two standards battled for dominance, with VHS eventually emerging as the winner.

Owen said that VHS did not win because it was a superior product; the outcome was influenced by several factors, with clever marketing, licensing problems for Sony, and the recording length of the tapes cited as a few.

In any case, the manufacturers were divided into two camps:

1. On the Betamax side were Sony, Toshiba, Sanyo, NEC, Aiwa, and Pioneer.

2. On the VHS side were JVC, Matsushita (Panasonic), Hitachi, Mitsubishi, Sharp, and Akai.

From the consumer’s point of view, the main difference between the two formats was recording length. Standard Betamax tapes lasted 60 minutes, not long enough to record a movie, while VHS tapes, at 3 hours, were perfect for recording television programs and movies. Sony in time offered a variety of solutions for longer recording, to no avail, because it was too late. This issue is often cited as the most decisive factor in the war.

Some commentators claim that pornography might have been a deciding factor, in that Sony would not allow it to be recorded on Betamax while the VHS group allowed it on their product. Mr. Owen stated, however, that in researching his article he was unable to find any substantiated evidence that pornography sales significantly influenced the outcome of the war.

By the end of the 1980s the dust had settled and VHS had won.

The last Betamax machine in the world was produced in Japan in 2002.

Works cited:

[1] http://www.nytimes.com/2013/02/05/opinion/brooks-the-philosophy-of-data.html

[2] http://www.nytimes.com/2013/02/19/opinion/brooks-what-data-cant-do.html

[3] http://www.mediacollege.com/video/format/compare/betamax-vhs.html


Jonathan Ciabotaru

final research
