Higher Ed Twitter List

Karlyn Morissette posted her Master Higher Ed Twitter List. Other than @eironae and @barbaranixon, I didn’t know anyone on the list. So I thought I’d post a list of the higher education professionals I follow, categorized by primary expertise.

Blackboard twitterers might be another post.

Those in bold are coworkers.

College / University / Departments

@atsu_its – A.T. Still University – IT Help Desk & Support
@BC_Bb – Butte College Blackboard System
@CTLT – Center for Teaching, Learning, and Technology @ Goucher College
@GeorgiaSouthern – Georgia Southern University
@ucblackboard – University of Cincinnati Blackboard Support

CE/Vista

@amylyne – Amy Edwards – CE/Vista DBA
@corinnalo – Corrina Lo – CE/Vista Admin
@elrond25 – Carlos Araya – CE/Vista Admin, Dr. C
@jdmoore90 – Janel Moore – CE/Vista Admin
@jlongland – Jeff Longland – CE/Vista Programmer
@lgekeler – Laura Gekeler – CE/Vista Admin
@ronvs – Ron Santos – CE/Vista Analyst
@sazma – Sam Rowe – YaketyStats
@skodai – Scott Kodai – former Vista Admin now manager
@tehmot – George Hernandez – CE/Vista DBA
@ucblackboard – UC Blackboard Admins

Faculty

@academicdave – David Parry – Emerging Media and Communications
@amberhutchins – Amber Hutchins – PR and Persuasion
@barbaranixon – Barbara Nixon – Public Relations
@captain_primate – Ethan Watrall – Cultural Heritage Informatics
@doctorandree – Andree Rose – English
@KarenRussell – Karen Russell – Public Relations
@mwesch – Mike Wesch – Anthropology
@prof_chuck – Chuck Robertson – Psychology

Information Technologist / Support

@aaronleonard – Aaron Leonard
@Autumm – Autumm Caines
@bwatwood – Britt Watwood
@cscribner – Craig Scribner
@dontodd – Todd Slater
@ECU_Bb_Info – Matt Long
@ekunnen – Eric Kunnen
@heza – Heather Dowd
@hgeorge – Heather George
@masim – ???
@mattlingard – Matt Lingard
@meeganlillis – Meegan Lillis
@soul4real – Coop

Assessment / Library / Research

@alwright1 – Andrea Wright – Librarian
@amylibrarian – Amy Springer – Librarian
@amywatts – Amy Watts – Librarian
@elwhite – Elizabeth White – Librarian
@kimberlyarnold – Kimberly Arnold – Educational Assessment Specialist
@mbogle – Mike Bogle – Research

Web Design / UI

@eironae – Shelley Keith

Director

@aduckworth – Andy Duckworth
@garay – Ed Garay
@grantpotter – Grant Potter
@IDLAgravette – Ryan Gravette
@Intellagirl – Sarah B. Robbins
@tomgrissom – Tom Grissom


Rock Eagle Debrief

GeorgiaVIEW

  1. SMART (Section Migration Archive and Restore Tool) created for us by the Georgia Digital Innovation Group seemed well received. I’m glad. DIG worked tirelessly on it on an absurdly short schedule.
  2. Information is strewn about in too many places. There isn’t one place to go for information; instead, between Blackboard, VistaSWAT, and GeorgiaVIEW, there are about 29. I am amazed I find information at all.
  3. Blackboard NG 9 is too tempting for some.
  4. Vista does DTD validation, but not very well. We need to validate our XML files before they are run. Since we do not control the source of these files, and errors by those creating the files cause problems, we run them in test before running them in production. I am thinking of something along the lines of validating each file, finding the errors, and reporting the problems back to the submitter. It should also do XML schema validation so we can ensure the data is as correct as possible before we load it.
Yaketystats
  1. If you run *nix servers, then you need YaketyStats. I have been using it for 2 years. It revolutionized how I go about solving problems. If you are familiar with my Monitoring post, then this is #2 in that post.
That is all for now. I am sure I will post more later.

Stats

Dreamhost collects the access and error logs for the web site domains they host for me. The stats are crunched by Analog. The numbers are okay. I much prefer Google Analytics. (Even AWStats is better.) Analog is good enough.

While at BbWorld, Nicole asked me about the hits to her wedding web site. She made it sound like she and Ashley already had the data and just needed to know how to interpret it. A couple of days later, it turned out they didn’t have the data at all; instead, they had run into a password issue.

Shell / FTP:

What I had suggested to Nicole was that Ashley could find the stats under logs/william-nicole.com. (Actually, it was logs/william-nicole.com/http/html.)

Web:

Since only Ashley’s user can access the stats through the shell / FTP route, I went into my admin panel to give Nicole and myself users with access to the stats. I had erroneously assumed the user with access to manage the content (Ashley) would also have access to the stats. Instead, Dreamhost only automatically grants the panel user (me) access to the stats. Doh! So I ended up creating accounts for them both.

Shameless Plugs:

Nicole’s site is http://william-nicole.com/.

Another site I am hosting for Shel is http://artistictraveler.nu/.

Firefox Weirdness

Our Systems folks upgraded the code running the Stats web site they let us use. This morning was the first time I had looked at it since the upgrade.

Naturally, it was not working for me. Figuring it was Mozilla Firefox’s fault, I tried the same web page in Flock. (Flock is Firefox plus some extra apps, but without any of the Add-Ons, formerly called Extensions and really plug-ins, that I use in Firefox.) Flock showed it fine, so I “knew” one of three Add-Ons had to be the culprit: Greasemonkey, NoScript, or FasterFox. I disabled all three and found the site worked as it should. So I enabled each in turn. The site still works.

Enabling one of the three should have rebroken the web site. That this failed to happen could mean:

  1. The Add-Ons did not break it. Something out of my control did.
  2. The Add-Ons did not break it. Something I don’t remember changing did.
  3. Disabling and re-enabling Add-Ons changes their configuration and their impact on pages.

Annoying.

Page View Metric Dying

First, the Metricocracy measured hits. Pictures and other junk on pages inflated the results, so the Metricocracy settled on unique visitors or page views instead. Now the Metricocracy wants us to measure attention. Attention is engagement: how much time users spend on a page.

What do we really want to know? Really, it is the potential value of the property. The assumption around attention is that the longer someone spends on a web site, the more money that site gains in advertisement revenue. The rationale is that users who barely glance at pages and spend little time on the site are not going to click ads. But does this really mean users who linger and spend large amounts of time on the site are going to click more ads?

To me, this means attention is just another contrived metric that doesn’t measure what is really sought. I guess the advertisement companies and the hosts brandishing their ads really do not want to report click-through rates?

My web browsing habits skew the attention metric way higher than it ought to be. First, I have a tendency to open several items in a window and leave them lingering. While my eyes spent a minute looking at the content, the page spent minutes to hours in a window… waiting for the opportunity. Second, I actively block images from advertisement sources and block Flash except when required.

For a DBA, page views also have debatable usefulness. On the one hand, we could use them because they represent a count of objects requiring calls to the database and rendering by the application and web server code. Hits represent all requests for all content, simple or complex, so they are more inclusive. Bandwidth represents how much data is pulled out of or pushed into the systems.
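
For illustration, all three measures can be pulled from a web server access log with a few shell one-liners. This is a sketch against a fabricated log; it assumes the common Apache combined format, where the requested path lands in field 7 and the byte count in field 10.

```shell
# Fabricated combined-format log lines (host, identd, user, timestamp,
# request, status, bytes, referer, user agent).
cat > access.log <<'EOF'
10.0.0.1 - - [01/Jan/2008:10:00:00 -0500] "GET /index.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0"
10.0.0.1 - - [01/Jan/2008:10:00:01 -0500] "GET /logo.gif HTTP/1.1" 200 20480 "-" "Mozilla/5.0"
10.0.0.2 - - [01/Jan/2008:10:00:05 -0500] "GET /page.html HTTP/1.1" 200 4096 "-" "Mozilla/5.0"
EOF

# Hits: every request, images and all.
wc -l < access.log

# Page views: only requests for pages, here crudely approximated by
# filtering the request path (field 7) for .html.
awk '$7 ~ /\.html$/' access.log | wc -l

# Bandwidth: sum the byte counts in field 10.
awk '{sum += $10} END {print sum}' access.log
```

The page-view filter is deliberately crude; real log analyzers maintain lists of which extensions count as pages.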

We DBAs also provide supporting information to the project leaders. Currently they look at the number of users or classrooms who have been active throughout the term. Attention could provide another perspective to enhance the overall picture of how much use our systems get.

Cat Finnegan, who conducts research with GeorgiaVIEW tracking data, measures learning effectiveness. To me, that is the ultimate point of this project. If students are learning with the system, then it is successful. If we can change how we do things to help them learn better, then we ought to make that change. If another product can help students learn better, then that is the system we ought to use.

Ultimately, I don’t think there is a single useful metric. Hits, unique users, page views, attention, bandwidth, active users, etc., all provide a nuanced view of what is happening. I’ve used them all for different purposes.

Computer v Human Diagnosis

I just finished How Doctors Think yesterday.

My first impression was that doctors don’t spend very much time thinking and gathering information to make a diagnosis. That impression struck a very negative chord with me, as in my profession of database and computer administration we spend hours picking apart the data we have to diagnose even minor-sounding issues.

The better impression ought to be that doctors spend very little time on a seemingly routine diagnosis. When confounded, they spend more time on analysis. They also have to deal with the patient’s lack of patience.

In tier 3 support, we spend even less time with routine issues. “Try another web browser and call me in the morning,” anyone? When we don’t know, this gives the “patient” something to do while we go investigate the real cause (looking at logs or stats). Unfortunately, our computer “patients” want resolutions in minutes or maybe hours. Few find taking a few years to heal a computer problem acceptable.
🙁


Better Way to Count

Our awesome sysadmins have put the user agent into our AWStats, so we are tracking these numbers now. They discovered something I overlooked: Netscape 4.x is used 10 times more than 7.x or 8.x. Wowsers! Some people really do not give up on the past.

Back in the Netscape is dead post, I used this to count the Netscape 7 hits.

grep Netscape/7 webserver.log* | wc -l

Stupid! Stupid! Stupid! The above requires a separate run for each version of Netscape. This is why I missed Netscape 4.

This is more convoluted, but I think it is a much better approach.

grep Netscape webserver.log* | awk -F'\t' '{print $11}' | sort | uniq -c | sort -n

It looks uglier, but it’s much more elegant. Maybe I ought to make a resolution for 2008 to be elegant in all my shell commands.

This version first pulls any entries with Netscape in the line. Next, the awk piece reports only the user agent string. The first sort puts all the similar entries next to each other so uniq will not accidentally count duplicates separately. The -c in uniq counts. The final sort with -n orders the lines by uniq’s count. The largest ends up at the bottom.
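
To make the steps above concrete, here is the pipeline run against a tiny fabricated tab-delimited log, with the user agent in field 11 as in the command above:

```shell
# Build a tiny fake tab-delimited access log; field 11 holds the
# user agent, matching the awk '{print $11}' above. The field
# contents are invented for illustration.
printf 'a\tb\tc\td\te\tf\tg\th\ti\tj\tMozilla/4.8 [en] (Netscape 4)\n' >  sample.log
printf 'a\tb\tc\td\te\tf\tg\th\ti\tj\tMozilla/4.8 [en] (Netscape 4)\n' >> sample.log
printf 'a\tb\tc\td\te\tf\tg\th\ti\tj\tNetscape/7.2\n'                  >> sample.log

# Same pipeline: filter on Netscape, extract the agent field, group
# duplicates, count them, and sort so the most common agent lands on
# the last line.
grep Netscape sample.log | awk -F'\t' '{print $11}' | sort | uniq -c | sort -n
```

Because awk splits on tabs only, the spaces inside the Netscape 4 agent string stay within field 11, which is exactly why the tab delimiter matters here.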

BbWorld Presentation Redux Part II – Monitoring

Much of what I might write in these posts about Vista is knowledge accumulated from the efforts of my coworkers.

This is part two in a series of blog posts on our presentation at BbWorld ’07, on the behalf of the Georgia VIEW project, Maintaining Large Vista Installations (2MB PPT).

Part one covered automation of Blackboard Vista 3 tasks. Next, let’s look at monitoring.

Several scripts we have written are in place to collect data. One of the special scripts connects to WebLogic on each node to capture data from several MBeans. Other scripts watch for problems with hardware, the operating system, the database, and even logins to Vista. Each server (node or database) has, I think, 30-40 monitors. A portion of the items we monitor is in the presentation. Every level of our clusters is watched for issues. The data from these scripts are collected into two applications.

  1. Nagios sends us alerts when values from the monitoring scripts fall outside of our expectations for specific criteria. Green means good; yellow means warning; red means bad. Thankfully no one in our group is colorblind. Nagios can also send email and pages for alerts. Finding the sweet spot where we get alerted for a problem but avoid false positives is perhaps the most difficult part.
  2. An AJAX application created by two excellent members of our Systems group, internally called Stats, graphs the same monitored data. Nagios tells us a node failed a test; Stats tells us when the problem started, how long it lasted, and whether others displayed similar issues. We also use Stats to watch trends. For example, by watching WIO usage we know it rises to a noonish peak, sloughs off by ~20%, and peaks again in the evening, fairly consistently over weeks and months.
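
Nagios checks follow a simple convention: exit status 0 means OK (green), 1 means WARNING (yellow), 2 means CRITICAL (red). Here is a minimal sketch of such a check; the metric, thresholds, and function shape are invented for illustration and are not taken from our actual scripts.

```shell
# Minimal Nagios-style check written as a shell function. Real
# plugins are standalone scripts; the exit-status convention
# (0=OK, 1=WARNING, 2=CRITICAL) is what Nagios keys its
# green/yellow/red display on. Thresholds here are made up.
check_load () {
    warn=4
    crit=8
    load=$1

    if [ "$load" -ge "$crit" ]; then
        echo "CRITICAL - load average is $load"
        return 2
    elif [ "$load" -ge "$warn" ]; then
        echo "WARNING - load average is $load"
        return 1
    else
        echo "OK - load average is $load"
        return 0
    fi
}

# Exercise all three states; a real check would read the metric from
# uptime, a log, or a WebLogic MBean instead of an argument.
check_load 2; echo "exit status: $?"
check_load 5; echo "exit status: $?"
check_load 9; echo "exit status: $?"
```

Tuning the warn/crit thresholds is exactly the sweet-spot problem mentioned above: too tight and you drown in false positives, too loose and real problems page no one.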

We also use AWStats to provide web server log summary data. Web server logs show activity of the users: where they go, how much, etc.

In summary, Nagios gives us a heads up there is a problem. Stats allows us to trend performance of nodes and databases. AWStats allows us to trend overall user activity.

Coradiant TrueSight was featured in the vendor area at BbWorld. This product looks promising for determining where users encounter issues. Blackboard is working with them, but I suspect it’s likely for Vista 4 and CE 6.

We have fantastic data. Unfortunately, interpreting the data proves more complex. Say the load on a server starts climbing, reaches the point where we get pages, and continues to climb. What does one do? Remove the node from the cluster? Restart it? Restarting it will simply shift the work to another node in the cluster. Say the same happens with the database: restarting the database will kick all the users out of Vista. Unfortunately, Blackboard does not provide a playbook for every support possibility. Also, if you ask three DBAs, you will likely get three answers.
😀

It’s important to balance underreaction and overreaction. When things go wrong, people want us to fix the problem. Vista is capable of handling many faults while failing to handle very similar ones. The linked example was a failed firewall upgrade. I took a similar tack with another firewall problem earlier this week. I ultimately had to restart the cluster that evening because it didn’t recover.

Part three will discuss the node types.

What is a Super Speeder?

There is a bill in the GA legislature to additionally fine people for excessive speeding. Fine. Good even. However, it disturbs me that people quote the wrong statistics as rationale and tie it to emotional cues.

Slowing down ‘super speeders’ on Georgia highways is a super idea – gainesvilletimes.com:

Last year in Georgia, 1,700 people lost their lives in traffic accidents; that averages to about one person every five hours. These aren’t benign statistics, folks. These are real people — mommas and daddies and sons and daughters, friends and neighbors — people who could and should still be with us.

People losing their lives in traffic accidents is quite serious. However, this isn’t 1,700 people who lost their lives because of excessive speeding. According to Weitz & Luxenberg (for 2003), speed was a factor in only about 20% of these traffic accident deaths. This is at the same level as deaths associated with alcohol (MADD claims it higher: 35%, and 30% in 2005). W&L also claim people were not wearing a seat belt in half of these deaths. If that is true, then let’s raise the fine for not wearing a seat belt by $100 a year until this drops below 5%.

Airbags are associated with an increase in deaths?

This map of crash deaths in GA is scary: 1) the map is unreadable unless you open the original JPG, and 2) it shows a lack of understanding of greater than vs. less than. Plus, I’d rather it show deaths as a percentage of traffic volume than raw counts. More cars mean more opportunities for people to do stupid things. For example, I am impressed by the low numbers of deaths in certain counties along I-75 and I-85.