Wednesday, May 27, 2015

Loopholes and Star Trek’s Moneyless Society

A recent paper in the "Futuring and Innovation" class asked us to write about a sociotechnical plan in a real or fictional organization. As I was reading the definition of "sociotechnical" (http://en.wikipedia.org/wiki/Sociotechnical_system), the first thing that came to mind was how ST:TNG mentioned a few times that they don't use money. As a matter of fact, I remember an episode (http://en.memory-alpha.wikia.com/wiki/The_Neutral_Zone_(episode)) where the Enterprise picked up some people who had been in suspended animation for a few hundred years, one of whom was a "Type-A" businessman who couldn't accept the way society and business had changed in the intervening years.
 
To quickly summarize, my question to myself was, "If money no longer existed, meaning basic needs of housing, food, clothing, etc. are provided, then what is the motivation to get up and go to work in the morning?" Ideally, people would be free to pursue whatever interests them rather than whatever has the biggest paycheck (although admittedly sometimes those two intersect). I then went on to analyze human motivational systems: pain avoidance vs. reward seeking, but that's not the point of this blog entry.
 
The purpose of this blog entry is to discuss organizations that had good plans, but for reasons out of their control, it all went wrong. In "Best Laid Plans," William Sherden (2011) devotes Chapter 7 to "Perverse Adaptations": stories of how well-intentioned plans were subverted, often causing more harm than the original situation the plan was supposed to improve.

For example, in 1975 the Energy Policy and Conservation Act was passed in response to the 1973 oil embargo. This legislation mandated that automobiles get a minimum of 27.5 mpg while trucks had to get 21 mpg. Sherden explains that prior to the act a variety of lower-mileage cars were produced, including the station wagon, but with the passing of the act those cars went off the market. To fill the gap for a family cargo-hauling car, manufacturers invented the SUV, which, incidentally, was built on a truck chassis, allowing it to use the 21 mpg standard. As SUVs became more popular, consumers switched from more fuel-efficient cars to SUVs, thus actually using more fuel than before.

What loopholes might exist in Star Trek's moneyless society? In a world where a replicator can make food, clothing, furnishings, gadgets, etc., the only things that would have value would be handmade goods, but what is my motivation for making pottery coffee mugs or over-the-sofa paintings if I don't get paid for it? It seems like some sort of barter system would have to exist: handmade or rare goods for handmade or rare goods.
 
But what if I don't produce anything handmade? Would I be destined to always have "cheap" replicated goods? Or would there exist some form of black market currency to fill the gap? Perhaps that is the purpose served by the "gold-pressed latinum" the Ferengi lusted after.

Given humans' competitive nature coupled with their desire for status (humorous proof here: http://www.sciencechannel.com/tv-shows/outrageous-acts-of-psych/outrageous-acts-of-psych-videos/vip-bus-experience/), it seems a black-market barter system would be a loophole that would certainly be exploited.

Reference:
Sherden, W. A. (2011). Best laid plans: The tyranny of unintended consequences and how to avoid them. ABC-CLIO.

Monday, May 18, 2015

Go Car, Go

This is a continuation of the autonomous car theme. I've created an animated slideshow that discusses some of the benefits.


For more information, see my previous post: Prediction Analysis: Autonomous Cars in Vancouver.

Thursday, May 14, 2015

Prediction Analysis: Autonomous Cars in Vancouver

Every time I read about autonomous cars - the Google mapping car, for example - or the 2011 Nevada legislation authorizing autonomous cars, I think back to watching The Jetsons as a kid and all the predictions about flying cars. I _still_ want one, dammit!

However, according to a report published last February by the Victoria Transport Policy Institute, autonomous cars may be a closer reality than flying cars (Litman, 2015). The report exhaustively examines production timelines, estimates public acceptance, and weighs the pros and cons of four different levels of autonomy in cars.

Briefly:

Level 1 - Function Specific Automation: Things like cruise control, lane guidance and hands-free parallel parking. Many higher end cars already have these features implemented.

Level 2 - Combined Function Automation: Multiple integrated functions like cruise control with lane centering. Under certain conditions the driver can take hands off the wheel and feet off the pedals but must remain aware and able to take control.

Level 3 - Limited Self-Driving Automation: Driver can rely on car for all functions, including safety, under certain conditions and is not expected to monitor the road all the time but should be able to take over control if needed.

Level 4 - Full Self-Driving Automation: Suitable for non-drivers, the vehicle monitors all road conditions with no expectation of passengers being able to assume control.

Obviously, the different levels have different pros and cons as well as progressively later predicted implementation dates.

Although some car manufacturers predict having Level 3 cars ready for the mass market by 2018, the reality is that the feature will most likely be expensive and take a considerable amount of time to gain widespread public approval and adoption. The report lists various automotive innovations and their time to widespread adoption as guidance for the timeline. For example, automatic transmissions were developed in the 1950s but weren't widely adopted until the 1990s, and hybrid vehicles have only about 4% market saturation despite being on the market for 25+ years.

Some of the advantages analyzed include increased mobility for non-drivers such as the elderly or disabled; the ability for individual drivers to rest or work on long commutes, which also promotes housing farther from the place of employment (where there may be lower property values, better schools, etc.); potential fuel or insurance savings; and, best of all, cars that can drop off their passengers and then park themselves. That final benefit, plus the ability to read or work during my commute, would make the feature worth quite a bit to me.

Sadly, the report cites a recent poll that revealed that, despite general support for autonomous cars, few respondents would want to pay for fully autonomous features, and many expressed concerns over safety and privacy.

Maybe they are another flying car, after all.

References
Litman, T. (2015). Autonomous vehicle implementation predictions: Implications for transport planning. Retrieved from Victoria Transport Policy Institute website: http://www.vtpi.org/avip.pdf

Monday, May 11, 2015

A Lack of Planning On Your Part...

...Sometimes does cause an emergency on my part.

Continuing in our vein of forecasting and scenario planning, today I'm going to look at the unintended consequences of the U.S. Energy Policy Act of 2005, which required the addition of ethanol to gasoline to reduce fossil fuel dependence and promote cleaner air. According to William A. Sherden in his book "Best Laid Plans: The Tyranny of Unintended Consequences and How to Avoid Them" (2011), the 2005 Act called for the use of 4 billion gallons of ethanol in 2007, ramping up to 35 billion gallons by 2017 (only 2 years from now, BTW).

On the surface, of course, the 2005 Act looks to be a step toward breaking us free from non-renewable resources and moving toward renewable ones such as biomass fermentation -- a direction we as a species MUST go.

Unfortunately, as seems to happen all too often with political initiatives, someone didn't do their science. David Pimentel of Cornell University, however, did do the science, and his analysis shows that there is no energy benefit to using biomass-based liquid fuel and that the strategy is not sustainable. Let's look at why.

U.S. ethanol is primarily produced from corn - a crop that is also used as food for both humans and animals. Growing the corn requires fuel for planting, cultivating, and harvesting the fields, as well as for transporting the finished product (pipelines tend to contaminate the ethanol with water, so it must be shipped via truck or train). Also, many of the pesticides and fertilizers used on the corn have a petroleum base, and don't forget about the heat needed for distillation. As a matter of fact, Pimentel estimated that ethanol consumes 29% more energy than it creates and accounts for only about 1% of energy production (Sherden, 2011).
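The arithmetic behind that claim can be sketched in a few lines. The numbers below are just Pimentel's headline estimate normalized to one unit of ethanol, not his underlying input data:

```python
# Back-of-the-envelope check of Pimentel's claim: if producing ethanol
# consumes 29% more fossil energy than the ethanol itself contains, the
# energy return on investment (EROI) falls below 1, i.e., a net loss.
energy_out = 1.00   # energy content of the ethanol produced (normalized)
energy_in = 1.29    # fossil energy consumed to produce it (Pimentel's estimate)

eroi = energy_out / energy_in
net_energy = energy_out - energy_in

print(f"EROI: {eroi:.2f}")                  # ~0.78 -- below 1.0 means net loss
print(f"Net energy: {net_energy:+.2f} units per unit of ethanol produced")
```

Any fuel with an EROI below 1.0 consumes more energy than it delivers, which is the core of Pimentel's sustainability argument.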

Source: http://www.afdc.energy.gov/fuels/ethanol_production.html
Perhaps worse, though, are the unintended environmental side effects. The original goal of offsetting CO2 emissions from automobile exhaust has been canceled out by the CO2 produced growing, refining, and transporting the ethanol, and increased corn production in the U.S. Midwest has led to greater fertilizer runoff, encouraging algae that deplete oxygen and kill fish and other wildlife at the mouth of the Mississippi in the Gulf of Mexico.

Perhaps even worse is the impact on worldwide food production. According to James Conca in his Forbes article "It's Final -- Corn Ethanol Is Of No Use," in the year 2000, 90% of corn produced in the U.S. went to feeding livestock and people, many of them in impoverished areas. However, in 2013, 40% of corn production went to ethanol and 45% to livestock, leaving just 15% for humans. Couple those numbers with the facts that the U.S. produces 40% of worldwide corn and that 70% of worldwide corn imports come from the U.S., and we can begin to see the full picture of ethanol's impact on world food supplies.

Hindsight is 20/20, as they say, but one has to wonder why none of these effects were uncovered in the pre-Energy Policy Act of 2005 analysis.

References:

Conca, J. (2014, April 20). It's final -- Corn ethanol is of no use. Retrieved from http://www.forbes.com/sites/jamesconca/2014/04/20/its-final-corn-ethanol-is-of-no-use/2/

Sherden, W. A. (2011). Best laid plans: The tyranny of unintended consequences and how to avoid them. ABC-CLIO.

Wednesday, May 6, 2015

Forecasting vs. Scenario Planning

Forecasting:

Investopedia (http://www.investopedia.com/terms/f/forecasting.asp) defines forecasting as "The use of historic data to determine the direction of future trends." It goes on to explain that businesses use forecasting to determine yearly budgets based on demand for goods versus cost of production, and that investors use forecasting to predict share price based on events affecting the company, such as sales expectations.

In a related article on forecasting methods, Investopedia (http://www.investopedia.com/articles/financial-theory/11/basics-business-forcasting.asp) tells us there are qualitative and quantitative models. Qualitative models depend upon expert opinion (including the Delphi Method) and are most successful in the short term. Quantitative models discount expert opinion, focusing entirely on data and attempting to take the human element out of the equation.
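As a toy illustration of the quantitative side, here is a minimal sketch of one of the simplest data-only models, a moving-average forecast. The sales figures are hypothetical, chosen only to show the mechanics:

```python
# A minimal quantitative forecasting model: forecast the next period as the
# mean of the last few observations (a simple moving average). No expert
# opinion involved -- only historical data, as quantitative models prescribe.
def moving_average_forecast(history, window=3):
    """Forecast the next period as the mean of the last `window` observations."""
    if len(history) < window:
        raise ValueError("not enough history for the chosen window")
    return sum(history[-window:]) / window

monthly_sales = [100, 110, 105, 120, 125, 130]   # hypothetical units sold
forecast = moving_average_forecast(monthly_sales)
print(f"Next month's forecast: {forecast:.1f}")  # mean of 120, 125, 130 = 125.0
```

Note how the model embodies the cons listed below: it assumes the future behaves like the recent past and cannot react to anything outside the data.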


Pros:
(from http://www.brighthub.com/office/entrepreneurs/articles/109618.aspx)

  • Helps predict the future and keeps companies future-focused.
  • Learn from the past.
  • Reduce inventory while maintaining proper output of goods.



Cons:

  • Rigid planning based on a single future with no ability to adapt to unforeseen events.
  • Data is always old and there is no guarantee the future will act/react the same way as in the past.
  • Businesses are influenced by their forecasts, but forecasts can't predict their own impact.

Scenario Planning:

(from http://en.wikipedia.org/wiki/Scenario_planning#Scenario_planning_compared_to_other_techniques)
A method of strategic planning that uses known facts about the future, such as demographic, political, industrial, and economic information. Scenario planning was originally used by the military to create simulation games for policy makers. In the business world, scenario planning has been adapted away from simulating opponent behavior toward a game against nature.

Pros:

  • Provides alternate perspectives on the future.
  • After several iterations, scenarios will become frameworks for dealing with the alternate futures.
  • Provides a method for dealing with diverse or conflicting collections of data.
  • Iterative method with built-in consistency checks.


Cons:

  • Little to no academic acceptance or research.
  • Subjectivity
    • Tendency to favor one particular scenario vs. consider all of them.
    • Tendency to take scenarios too literally, instead of using as a flexible tool to bound the future.
  • Organizational limitations
    • Team composition
    • Facilitator role
    • Focus (long vs. short term, global vs. regional, etc.)
  • Weak integration into other planning techniques.
    • Budgeting and planning based on one future.

 

Monday, May 4, 2015

Learning Analytics: Applying Big Data to Education

Big business has relied on big data analytics for several years now for a variety of purposes. Perhaps the most impactful to business is rapid turnaround on marketing programs, but consumers will be more familiar with the Amazon and Netflix recommendation engines that suggest other products based on past customers' viewing and purchasing patterns.

That same technology is making its way into education in the form of "Learning Analytics," as described in the infographic from Open Colleges at right. The New Media Consortium tracks developing and future trends in technology use and adoption in education. In their paper "The NMC Horizon Report: 2014 Higher Education Edition," the group identified Learning Analytics as a rapidly developing technology in the "one year or less" category for time to adoption.

Much like other applications of big data, Learning Analytics offers the promise of "individualized education." Through the use of visualization and dashboard tools, administrators and educators will have access to an unprecedented amount of information about individual students, student trends, classes as a whole, etc. They will be able to compare individual behaviors (time spent on education and class related sites, frequency of checking message boards, etc.) with overall behavioral trends to identify students at risk for bad grades or dropping out as well as be able to make recommendations based on trends displayed by high achievers, as one example.  
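As a rough sketch of the kind of comparison such a dashboard might make, an at-risk flag could be as simple as a z-score against the class trend. The metric, student names, and threshold here are hypothetical, not drawn from the NMC report:

```python
# A hedged sketch of a learning-analytics comparison: flag students whose
# engagement falls well below the class trend. The data and cutoff are
# invented for illustration.
from statistics import mean, stdev

# hours per week spent on class-related sites (hypothetical data)
engagement = {"ana": 9.5, "ben": 8.0, "cruz": 2.0, "dee": 10.5, "eli": 9.0}

def at_risk(scores, z_cutoff=-1.5):
    """Return students whose engagement z-score falls below the cutoff."""
    mu, sigma = mean(scores.values()), stdev(scores.values())
    return [s for s, v in scores.items() if (v - mu) / sigma < z_cutoff]

print(at_risk(engagement))   # → ['cruz']
```

A real system would of course combine many such behavioral signals and validate the thresholds against historical outcomes rather than picking them by hand.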

The NMC report mentions several university programs already in place to evaluate the benefits of big data use in education. Eastern Connecticut State University began a five-year initiative to improve the success of low-income, minority, and first generation students using big data analytics and the University of Wisconsin began a program in 2013 to match behavior patterns to students with low grades. Stanford University is analyzing large datasets generated from online learning resources to build an educational dashboard and in 2013 they received a $200,000 grant from the Bill and Melinda Gates Foundation to support Stanford's Learning Analytics Summer Institute to provide training to researchers.

In engineering (yes, even software engineering!) you often hear phrases like "You can't improve what you don't measure." However, I would also claim that the raw data itself - the measurement - does no good without the analysis to convert the data into information and the information into knowledge. Emerging big data analytics applications like Learning Analytics are providing us with the tools to make that conversion.

References:

Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2014). NMC Horizon Report: 2014 Higher Education Edition. Austin, Texas: The New Media Consortium.

Pinantoan, A. (2012). Learning analytics 101 [Infographic]. Retrieved from http://www.opencolleges.edu.au/informed/learning-analytics-infographic/

Monday, April 27, 2015

Alan Turing - Accidental Inventor of the Stored-Program Computer

Intro

Humans have been inventing better, faster, and more accurate ways of doing computation for literally thousands of years. Examples range from the Antikythera Mechanism (sometimes attributed to Archimedes), considered the first analog computer, from about 100 B.C. (Wallis, 2008), to Charles Babbage's Difference Engine in 1822 ("Difference engine - Wikipedia, the free encyclopedia," 2015), to the "first computer program" Lady Ada Lovelace devised to run on Babbage's Analytical Engine ("Ada Lovelace - Wikipedia, the free encyclopedia," 2015), to Hollerith's punched cards for the 1890 census ("Punched card - Wikipedia, the free encyclopedia," 2015).
Among the prestigious company mentioned above, a name that cannot go unmentioned is Alan Turing. Recently, Dr. Turing has received a royal pardon and other much-deserved publicity, and he is well known in computer science circles for the invention of the "Turing test" for artificial intelligence. While Turing is widely recognized for his contributions to cryptography in World War II, what is sometimes overlooked is his accidental contribution to general computer science and the impact it made on the war effort at the time.

From Theoretical Mathematics to Computers

Originally, Dr. Turing was writing a mathematics paper called "On Computable Numbers, with an Application to the Entscheidungsproblem" (Turing, 1937, p. 230). In it, Dr. Turing was attempting to answer the question: does there exist an algorithm that can determine, for any mathematical statement whatsoever, whether it is true or false? This is the essence of the Entscheidungsproblem, or Decision Problem (Weisstein, n.d.).

To help answer this question, Dr. Turing described a computing machine capable of reading and writing "symbols" on a paper tape. This machine could be programmed with a set of internal rules before starting to read the tape, and the combination of the previously read symbol and the current configuration would determine a new configuration - essentially equivalent to the machine state.
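That model is concrete enough to sketch in a few lines of modern code. This toy simulator is my own illustration, not anything from Turing's paper; its rule table maps (state, symbol) to (symbol to write, direction to move, next state):

```python
# A minimal sketch of Turing's machine model: a tape of symbols, a head
# position, and a table of rules mapping (state, symbol) to
# (new symbol, move direction, new state). This toy machine simply inverts
# a string of bits and then halts.
def run(tape, rules, state="start"):
    tape = dict(enumerate(tape))        # sparse tape; unwritten cells are blank
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, " ")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip()

invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),   # blank cell: we're done
}
print(run("1011", invert))   # → 0100
```

The "configuration" Turing describes is exactly the (state, symbol) pair used to look up each rule.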
Eventually, this line of thought led Dr. Turing to ask: if the computer is capable of all possible computations, is it possible for one computer to read another computer's code and determine whether it will halt or go on forever? Turing was able to prove that it is not possible to create a program that reads another program and determines whether it will halt, and he then used this fact to answer the original question - there is no way to always say whether a mathematical statement is provable (Campbell, n.d.).
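Turing's proof is a diagonal argument that translates nicely into modern terms. The sketch below is my paraphrase: given any candidate halting decider, we can build a program that defeats it, so no correct decider can exist. The deliberately naive decider here always answers "doesn't halt":

```python
# Sketch of the halting-problem diagonal argument. `halts` is a candidate
# function claiming to decide whether prog(prog) halts; `make_trouble`
# builds the self-referential program that contradicts it.
def make_trouble(halts):
    """Given a candidate halting decider, build the program that defeats it."""
    def trouble(prog):
        if halts(prog, prog):   # decider says prog(prog) halts...
            while True:         # ...so loop forever,
                pass
        return "halted"         # ...otherwise halt immediately.
    return trouble

# Try a candidate decider that always answers "doesn't halt":
always_no = lambda prog, arg: False
trouble = make_trouble(always_no)

# always_no claims trouble(trouble) runs forever, yet it returns:
print(trouble(trouble))   # → halted  (the candidate decider is refuted)
```

A candidate that answered "halts" would be refuted the other way: trouble(trouble) would then loop forever. Either answer is wrong, which is the heart of Turing's proof.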

From Paper to Big Iron

Perhaps more important than the proof in Dr. Turing's paper was the almost off-handed mention of storing "symbols" on paper tape which can be read by the computer and will affect its state. It doesn't take much imagination to realize that this is the conceptual model for a stored-program computer.
While it is probable that the stored-program computer would have come about as a natural evolution of humanity's quest for mechanical calculating machines, we will never know. Shortly after Turing's paper was published, the world was plunged into a war that found several countries clamoring for better and faster calculations.
Mathematician John von Neumann is generally credited with the invention of an early computer architecture that, among other things, used the stored-program concept (the "Von Neumann architecture") in his work on the EDVAC computer and the associated 1945 paper "First Draft of a Report on the EDVAC." Von Neumann was working on the Manhattan Project at the Los Alamos National Laboratory - a computationally bound project. It is unknown whether von Neumann was aware of Turing's paper, but it is worth noting that von Neumann was at Cambridge in 1935 and Princeton from 1936-1937 at the same time as Turing, and the two were, in fact, acquainted ("Von Neumann architecture - Wikipedia, the free encyclopedia," 2015).
Prior to the EDVAC, the ENIAC computer was the state of the art in computation; however, it had a few drawbacks. Namely, it had no stored-program memory. Programming was quite literally done by the error-prone process of rewiring the circuitry and could take three weeks ("Von Neumann architecture - Wikipedia, the free encyclopedia," 2015).
At about the same time, Turing was working at England's Bletchley Park, site of the Government Code & Cypher School. He created the Bombe, an electromechanical machine that helped find German Enigma machine settings but was not a calculation device. It was more of a brute-force decryption device, having the equivalent of several Enigma machines wired together ("Bombe - Wikipedia, the free encyclopedia," 2015).
More importantly to the topic at hand, Turing also contributed to the Colossus project - a top-secret programmable electronic computer used by Bletchley Park for decryption of the Lorenz cipher (Copeland, 2006). The Colossus was designed by an engineer named Tommy Flowers, with Turing's work on probability incorporated into the design. Ironically, due to this work, Turing knew that large-scale electronic computing was feasible before the Von Neumann paper but was not allowed to reveal this fact due to the secretive nature of the Colossus project.

References:


Ada Lovelace - Wikipedia, the free encyclopedia. (2015, April 22). Retrieved April 27, 2015, from http://en.wikipedia.org/wiki/Ada_Lovelace#First_computer_program
Bletchley Park - Wikipedia, the free encyclopedia. (2015, April 12). Retrieved April 27, 2015, from http://en.wikipedia.org/wiki/Bletchley_Park
Bombe - Wikipedia, the free encyclopedia. (2015, April 13). Retrieved April 27, 2015, from http://en.wikipedia.org/wiki/Bombe
Campbell, M. (n.d.). New Scientist TV: See how Turing accidentally invented the computer [Web log post]. Retrieved from http://www.newscientist.com/blogs/nstv/2012/06/how-turing-accidentally-invented-the-computer.html
Colossus computer - Wikipedia, the free encyclopedia. (2015, April 27). Retrieved April 27, 2015, from http://en.wikipedia.org/wiki/Colossus_computer
Copeland, B. J. (2006). Colossus: The secrets of Bletchley Park's codebreaking computers. Oxford: Oxford University Press.
Difference engine - Wikipedia, the free encyclopedia. (2015, February 15). Retrieved April 27, 2015, from http://en.wikipedia.org/wiki/Difference_engine
Punched card - Wikipedia, the free encyclopedia. (2015, April 9). Retrieved April 27, 2015, from http://en.wikipedia.org/wiki/Punched_card
Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1), 230-265. doi:10.1112/plms/s2-42.1.230
Von Neumann architecture - Wikipedia, the free encyclopedia. (2015, April 12). Retrieved April 27, 2015, from http://en.wikipedia.org/wiki/Von_Neumann_architecture
Wallis, P. (2008, July 31). Ancient Greek computer from 100 B.C.: Archimedes strikes again? Retrieved from http://www.digitaljournal.com/article/258045
Weisstein, E. W. (n.d.). Decision Problem -- from Wolfram MathWorld. Retrieved from http://mathworld.wolfram.com/DecisionProblem.html


Monday, April 20, 2015

Back in the Swing of Things

This quarter I have a class called "Futuring and Innovation" that encourages us to create and write in a blog, and I realize now that it's been quite a while since I last updated this one. My original intent, of course, was to write about topics that interested me as I made my way through this doctoral degree, but, as seems to happen, I got busy with classes and research and have not updated as often as I would have liked.

So, in this new spirit of blogging, I'm going to discuss a video I found entitled "Big Data is Better Data" by Kenneth Cukier that touches on one of the topics in my dissertation. Feel free to watch the video below.


Mr. Cukier is, of course, discussing the innovation of big data. The buzzword "big data" is now a little bit passé, but for those who have been living under a rock for the last couple of years, here is the quick summary: big data refers to our ability to process information in ways previously impossible. Recent advances in storage allow us to collect and keep more data than ever before, and recent advances in computer algorithms, namely the MapReduce algorithm made famous by Google in 2008 (Dean & Ghemawat, 2008), give us the ability to process larger amounts of data in smaller amounts of time. Another unique factor is that this new processing ability can handle unstructured data, such as pictures or videos or sound files, whereas in the past data processing required some form of structure such as you would find in a database.
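As a sketch of the idea behind MapReduce (a single-process toy, nothing like Google's distributed implementation), here is the classic word-count example: a map step emits (key, value) pairs, a shuffle groups them by key, and a reduce step aggregates each group.

```python
# Toy illustration of the MapReduce pattern Dean & Ghemawat describe.
# In the real system each phase runs in parallel across many machines;
# this sketch only shows the shape of the computation.
from collections import defaultdict

def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield word, 1                  # emit (word, 1) for each occurrence

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)          # group all values by key
    return groups

def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data is better data", "data beats opinion"]
print(reduce_phase(shuffle(map_phase(docs))))   # "data" appears 3 times
```

The power of the pattern is that the map and reduce steps are independent per key, which is what lets the framework spread the work across a cluster.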

Mr. Cukier eloquently discusses some of the pros and cons of big data analytics (meaning the ability to turn the data into some sort of useful knowledge). He makes a compelling case for the collection and usage of the data and in that same vein I had a few thoughts on the matter myself. 

The Good: We have become used to big data being used in our everyday lives in fairly benign ways - the Amazon and Netflix recommendation engines are just two examples we take for granted. However, in some possibly more life-changing uses than suggesting you watch "Orange Is the New Black" or recommending shoes to go with that skirt, big data analysis is being used by the medical community to help find cures for cancer and even tailor drugs and therapies to individual patients, by researchers to develop drought-resistant strains of rice, and by the public sector to evaluate air quality and decide where to allocate police resources.
The Bad: There is a now-legendary story of big data analysis attempting to predict influenza outbreaks based on social media interaction that went awry (just because people are talking about the flu doesn't mean they have it). Shaw (2014) tells of a big data researcher using cell phone data in Africa to predict people's movements who at first thought he had found a predictor for cholera outbreaks, when what he had really found was confirmation of local flooding. Stories such as this remind us that correlation does not equal causation.
The Ugly: Perhaps the most famous recent example of the misuse of big data comes from stories of the NSA collecting phone records and other data for the purpose of spying on American residents. These stories help us understand the fragility of privacy. When so much data from so many sources is available, how easy is it to pierce the thin veil of privacy?


References:

Dean, J., & Ghemawat, S. (2008). MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1), 107-113. doi:10.1145/1327452.1327492

Shaw, J. (2014, March). Why Big Data is a Big Deal. Harvard Magazine, 116(4), 30-35. Retrieved from http://harvardmagazine.com/2014/03/why-big-data-is-a-big-deal