Review: Drive: The Surprising Truth About What Motivates Us

Drive: The Surprising Truth About What Motivates Us
Drive: The Surprising Truth About What Motivates Us by Daniel H. Pink
My rating: 5 of 5 stars

A few days ago I tweeted,

How bad would it be for me to anonymously leave a copy of @DanielPink ‘s book Drive on the desk of every exec[utive] at work?

First, I actually think every person supervising others, and even those working on our flat teams, should study and implement this. The good news is I already see hints of it in the workplace, nestled in the cracks. Knowing why these behaviors improve performance and taking it to the next level is the dream. We have superstar teams and this is why. Second, ever since I watched Pink’s TED Talk and RSA videos, these ideas are things I mention. The book just adds more fuel to the fire.

This is an easy read. The appendix contains a summary of how to apply these ideas as an individual, an organization, or an educator. And the bibliography gives me the chance to dive even deeper.

View all my reviews

Motivation 3.0

A recent event reminded me I should read Daniel Pink’s book Drive: The Surprising Truth About What Motivates Us. I picked it up in August to read, but since my copy is a hardback the Georgia heat would warp it, so I left it forgotten in the bedside table. So here I am, thoroughly enjoying it.

My 2009 post TED Talk: Dan Pink on the surprising science of motivation is about his discussion of the ideas covered in the book. It is one of my all-time favorite TED Talks. RSA produced an animated video for a similar talk on the same topic.

Rewards improve performance for mechanical tasks. They malfunction when the tasks require even rudimentary cognitive skill. These extrinsic motivators are what Pink calls Motivation 2.0. We need to look at Motivation 3.0, where intrinsic motivators drive performance. They are:

    • Autonomy – urge to direct our own lives
    • Mastery – urge to get better and better at something that matters
    • Purpose – urge to participate in something larger than ourselves

Recently I lamented about how I may have profited from Specialist Culture, where employees who are technically gifted or great in their fields don’t have to consider how their behavior or work affects anyone. (Source: The Toxic Workplace) The benefit of being considered an expert on a rock star team? We suffer less compliance and receive more autonomy, so we can self-direct toward mastery and take on the projects that give us purpose. I realized that for most of my career I have had great amounts of autonomy. Supervisors pointed me at the problem, provided a vision of the end result, and let me go at it. That is tremendous trust to place even in a 19-year-old, trust I guess I earned. (Surprising.) Also, these supervisors provided me valuable instant feedback on my work.

Perhaps the history of being treated this way is why I treated the student assistants I supervised the same way. Also, losing autonomy at my prior position, and the way that frustrated me, was a huge factor in my being poached away to my current position. Anyway, this stuff will continue to be a part of my thinking, both in how bosses treat me but especially in how I work with teams. An interesting question is how to get more areas of the organization to achieve the same.

Unintended Consequence of Ads

My Internet Service Provider spams me about deals. Requests not to receive phone calls or emails have no effect. (I love Google Voice because I have their number on a no-ring list for their robocalls.) They send emails weekly about deals I should take to pay them more than I do now. Usually I delete the emails without thought. However, when I am trying to use the service and the web mail takes three minutes to load, like everything I have accessed recently on the Internet, this email about a deal makes me think…

If I stop paying you anything, then that is the best deal of all.

Not sure whether this is fortunate or unfortunate, but I try not to make decisions when frustrated. That negative emotional state creates an attentional bias: I predict that if I stay, constant poor performance will annoy me all the time. The reality is it is occasional.

Still. Frustrating.

This is how our clients must feel about performance problems, both the ones within our ability to resolve and the ones caused by something outside our control (ISPs, networks, client computers).


Why Ten

The question of why we run ten clusters came up recently. Off the top of my head, the answer I gave was okay. Here is my more thoughtful response.

Whenever I have been in a conversation with a BEA (more recently Oracle) person about Weblogic, the number of nodes we run has invariably surprised them. Major banks serve ten times the number of simultaneous users we have on a half dozen managed nodes or fewer. We have 130 managed nodes for production. Overkill?

They have some advantages.

  1. Better control over the application. WebCT hacked together an install process very much counter to the way BEA would have done it. BEA would have had one install the database and the web servers, then deploy the application using either the console or the command line. WebCT created an installer which does all this in the background, out of sight and mind of the administrator. They also created start and stop scripts which drive command-line interaction with Weblogic to start the application. Great for automation and for making it simple for administrators. It also lobotomizes the console, making many advanced things one could normally do risky. So now the console is only useful for some minor configuration management and monitoring.
  2. Better control over the code. When there is a performance issue, they can find the cause and improve the efficiency of the code. The best I can do is point out the inefficiencies to a company which chose a completely different codebase as its priority. If you do not have control over the code, then you give the code more resources.
  3. As good as Weblogic is at juggling multiple managed nodes, more nodes does not always equal better. Every node has to keep track of the others. The heartbeats communicate through multicast. Every node sends out its own and listens for the same from all the others. Around twenty nodes, they would miss occasional beats on their own. Throw in a heavy workload, and an overwhelmed node can miss enough beats that the others mark it as unavailable. Usually this is the point when the monitors started paging me about strange values in the diagnostics. Reducing the number of nodes helped.
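The failure-detection rule in point 3 can be sketched in a few lines. This is a toy model of the general "mark a peer down after too many consecutive missed heartbeats" idea, not WebLogic's actual internals; the function name, node names, and threshold are all illustrative.

```python
MISS_THRESHOLD = 3  # consecutive missed beats before a peer is marked down

def update_peer_status(missed_counts, heard_from, peers):
    """Advance one heartbeat interval.

    missed_counts: dict of peer -> consecutive missed beats so far (mutated)
    heard_from:    set of peers whose beat arrived this interval
    peers:         all cluster members this node tracks
    Returns the set of peers now considered unavailable.
    """
    unavailable = set()
    for peer in peers:
        if peer in heard_from:
            missed_counts[peer] = 0          # beat arrived; reset the counter
        else:
            missed_counts[peer] = missed_counts.get(peer, 0) + 1
        if missed_counts[peer] >= MISS_THRESHOLD:
            unavailable.add(peer)
    return unavailable

# An overwhelmed node ("node7") that stays silent for three straight
# intervals ends up marked unavailable by its peers:
counts = {}
peers = {"node7", "node8"}
for beat in ({"node7", "node8"}, {"node8"}, {"node8"}, {"node8"}):
    down = update_peer_status(counts, beat, peers)
print(sorted(down))  # ['node7']
```

The point of the sketch: under heavy load, a healthy-but-busy node looks identical to a dead one, which is why shrinking the cluster (fewer beats to send and track) helped.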

More resources means more nodes. We had two clusters of about 22 nodes each (44 total) when we hit a major performance wall. They were split into four clusters with 15 nodes each (60 total). Eventually these grew to over 22 nodes each again. At this point upgrading in place was out of the question. A complete overhaul with all new databases and web servers meant we could do whatever we wished.

The ideal plan was a cluster per client. Licenses being so expensive scrapped that plan.

Ten clusters with 13 managed nodes each was a reasonable compromise. More nodes overall, while also using smaller clusters, achieved both needs well. Empty databases also gave us a fresh starting point. Still, the databases have grown to the point that certain transactions run slowly after only four terms. (I was hoping for six.) Surviving the next two years will be a challenge, to say the least. I wish we got bonuses for averting disasters.

Night School

I noticed a couple weeks back that there are interesting spikes in the evening hours of Sunday through Wednesday. Just like morning/afternoon usage, the evening spikes diminish, but even more so by comparison.

As I recall, for Monday through Wednesday when I first started, the evening traffic almost flatlined at 5pm and then dropped off at 11pm. Over time the spike has grown to the point that we have more users active in the evening than during “business hours”.

In this graph, the numbers across the bottom are the week of the year. The numbers along the left side are the number of users active within the last 5 minutes.

(Graph produced by YaketyStats.)

Really, I have no data to explain the change in trend. (We are not 100% online; the majority of the classes we host are supplemental to face-to-face, with hybrid and totally online fighting for second place.) I hope the days of instructors teaching in a computer lab and having students follow along died a hard, painful death. If so, then the amount of activity during the day would lessen some. Students and faculty would still go online during the day between classes. However, more student access to broadband at home would empower them to go online more often in the evening and increase the difference between day and evening user activity.

Identifying where each individual IP resides is hard. Doing so for many is more time than I would want to invest in the question. Campus vs. residential vs. corporate is relatively easy. However, “home” for a student could be on campus or residential. Maybe someone else knows better than me.
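The "campus vs. residential vs. corporate is relatively easy" part amounts to bucketing source IPs by known network blocks. A minimal sketch using Python's ipaddress module; the CIDR blocks below are documentation placeholders, not our real campus or ISP ranges:

```python
from ipaddress import ip_address, ip_network

# Hypothetical network blocks; substitute the real campus/ISP ranges.
CAMPUS_NETS = [ip_network("198.51.100.0/24")]
RESIDENTIAL_NETS = [ip_network("203.0.113.0/24")]

def classify(ip):
    """Bucket a source IP as campus, residential, or other."""
    addr = ip_address(ip)
    if any(addr in net for net in CAMPUS_NETS):
        return "campus"
    if any(addr in net for net in RESIDENTIAL_NETS):
        return "residential"
    return "other"

print(classify("198.51.100.7"))  # campus
print(classify("192.0.2.9"))     # other
```

The hard part the paragraph identifies remains: this says where a request came from, not whether that place counts as "home" for a given student.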

I guess this means we really ought to look at our automated operations, which kick off at 10pm. WebCT recommended they be run when user activity is light, or they could impact performance.

Bureaucratic Processes Stifle Idea Sex

Yesterday was the TED talk on what happens when ideas have sex. Go read that post and watch the video. I can wait.
😀

I also read an article in Federal Computer Week about employees who are frustrated with mediocre performance by their organizations and with low expectations. (I’ve heard people talking about this happening mostly everywhere.) Plus, I have an aunt who recently retired from federal service with interesting stories.

It seems like some organizations focus on the bad performance and on ways of bringing everyone up to a certain level. So they set new policies, hire many managers whose focus is compliance, and concentrate on past screw-ups not happening again. It’s like they have yet to learn that focusing on those past screw-ups makes them vulnerable to new ones. For example, if everyone focuses on the blowout preventer to avoid another BP oil spill, then they miss other components, so the next accident will be in something like the riser cap containment system.

Okay, sure, there was a problem. Our focus ought to be on identifying what people do well and having them do that thing. Then we offload the responsibilities they don’t do well onto people who will do them well.

Most Recent Data

One of the common complaints instructors have about CE/Vista is that the Tracking reports don’t have recent enough data. They are shown this screen for selecting the date range.

Select a Date Range for the Report

Including here the most recent time the tracking was processed (which the application already displays to the server administrator under background jobs) would help the instructor know whether the data is as recent as 4:00 am or 1:00 pm.

Maybe the next time Tracking will run ought to be displayed to the instructor as well, so he or she knows whether it will run within the hour or the next morning. That might cut down on instructors running the report again and again expecting it to magically show data which won’t be available until many hours later.

Administrators sometimes have to pick the best operational time to run Tracking. We have direct login checks running several times per hour. When Tracking ran every hour and these checks ran at the same time, the time the direct login checks took spiked. Users also complained about poor performance. So we have Tracking run in the wee hours of the morning, when users are not generally on the system.

Tracking Specific File Use

CE/Vista Reports and Tracking displays summaries of activity. If an instructor seeks to know who clicked on a specific file, then Reports and Tracking falls down on the job.

The Course Instructor role can produce a report of the raw tracking data. However, access to the role falls under the Administration tab, so people running the system need to make a user specifically to enroll themselves at the course level to get the reports. (Annoying.)

Instead, the administrators for my campuses pass requests to generate these reports up to my level of support. For providing them I have SQL to produce a report. This example is for users who clicked on a specific file. The values the SQL composer will need to alter are the learning context id and the file name pattern.

-- SQL*Plus display formatting
set lines 200 pages 9999
col user format a20
col action format a32
col pagename format a80

-- Blank line between users plus a per-user count of actions
clear breaks computes
break on User skip 1
compute count of Action on User

-- Replace 1234567890 with the section's learning context id and
-- filename.doc with the tracked file's name
select tp.user_name "User",ta.name "Action",
      to_char(tua.event_time,'MM/DD/RR HH24:MI:SS') "Time",
      NVL(tpg.name,'--') "PageName"
  from trk_person tp, trk_action ta, trk_user_action tua,
      trk_page tpg, learning_context lc
  where tp.id = tua.trk_person_id
    and ta.id = tua.trk_action_id
    and tua.trk_page_id = tpg.id (+)
    and tua.trk_learning_context_id = lc.id
    and lc.id = 1234567890
    and tpg.name like '%filename.doc%'
  order by tp.user_name,tua.event_time
/

Output

  • User aka tp.user_name – This is the student’s account.
  • Action aka ta.name – This is an artifact of the original script. You might drop it as meaningless from this report.
  • Time aka tua.event_time – Day and time the action took place.
  • PageName aka tpg.name – Confirmation of the file name. Keep it if using LIKE on the page name in the select.

Considerations

I use the learning context id (lc.id aka learning_context.id) because in my multi-institution environment, the same section name could be used in many places. This id ensures I do not pull data from multiple sections.

The tricky part is identifying the file name. HTML files generally will show up under the name in their title tag (hope the instructor never updates it). Office documents generally will show up under the file name. Here are a couple approaches to determining what to use for tpg.name (aka trk_page.name).

  1. Look at the file in the user interface.
  2. Run the report without limiting results to any tpg.name. Identify from the results the name you wish to search on and use: tpg.name = 'page name'

Most tracked actions do have a page name. However, some actions do not. The outer join and NVL in this SQL print a “–” in those cases.

Email Harvesters

It is a good sign that I missed the story about the brothers convicted of harvesting emails the first time around. Well, I noticed a followup.

Back around 2001, the CIO received complaints about performance of the web server. So I went log trolling to see what the web server was doing. A single IP dominated the HTTP requests. This one IP was passing various last names into the email directory. Some quick research revealed Apache could block requests from that IP. That calmed things down enough for me to identify the owner of the IP. The CIO then bullied the ISP into providing contact information for the company involved.
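The post doesn't show the actual directives, but the Apache of that era handled this with mod_access host rules. A plausible sketch, with a placeholder path and a documentation IP standing in for the real ones:

```apache
# Block a single abusive client from the email directory
# (Apache 1.3/2.x "Order/Allow/Deny" syntax; path and IP are placeholders).
<Location /email-directory>
    Order Allow,Deny
    Allow from all
    Deny from 203.0.113.45
</Location>
```

A deny like this stops the scraping immediately while the logs are used to track down the IP's owner, which is roughly the sequence described above.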

Previous little adventures like this landed me a permanent job, so I jumped at similar challenges.

Well, a few years later, it happened again. By this time my boss had had me develop a script for disseminating the anti-virus software package to home users. Basically, it used email authentication to verify whether someone was eligible for the download link. So I applied the same technique to the email directory. Well, this upset some people who legitimately needed email addresses. So the human workers would provide email addresses to people with a legitimate need.

I’m glad that, since I’ve left, VSU no longer looks up email addresses for people. (I thought some of the requests questionable.) Also, my little email authentication script predates LDAP being available at the university. I think the new solution is much better.

One of the more vocal complainers about my having stopped non-VSU access to the email directory was my current employer. We apparently list email addresses for employees freely. Which makes me wonder how much of the spam we get is due to the brothers described at the beginning of this story? Or other email harvesters? Just hitting the send button potentially exposes the email address.

No worries. I’m sure Glenn is protecting me. 🙂

2nd Blackboard Blog

A blog without comments, to me, isn’t a blog. Blog posts are about stimulating discussion, so the comments are the most important feature. Content without feedback is publicity or a news story, not a blog. So Blackboard Blogs at educateinnovate.com isn’t really a blog.

Steve Feldman, a Bb performance engineer, had the first Blackboard Inc blog with Seven Seconds. He mysteriously stopped last fall. 🙁

Ray Henderson, the new Bb President for Learn, has a blog. Read his introduction post. He specifically wants discussion and dialog. Someone at Blackboard who understands The Cluetrain Manifesto? I am hopeful this is a sign of positive change.