Higher Ed Twitter List

Karlyn Morissette posted her Master Higher Ed Twitter List. Other than @eironae and @barbaranixon, I didn’t know anyone on the list. So I thought I would post a list of the higher education professionals I follow, categorized by primary expertise.

Blackboard twitterers might be another post.

Those in bold are coworkers.

College / University / Departments

@atsu_its – A.T. Still University – IT Help Desk & Support
@BC_Bb – Butte College Blackboard System
@CTLT – Center for Teaching, Learning, and Technology @ Goucher College
@GeorgiaSouthern – Georgia Southern University
@ucblackboard – University of Cincinnati Blackboard Support

CE/Vista

@amylyne – Amy Edwards – CE/Vista DBA
@corinnalo – Corrina Lo – CE/Vista Admin
@elrond25 – Carlos Araya – CE/Vista Admin, Dr. C
@jdmoore90 – Janel Moore – CE/Vista Admin
@jlongland – Jeff Longland – CE/Vista Programmer
@lgekeler – Laura Gekeler – CE/Vista Admin
@ronvs – Ron Santos – CE/Vista Analyst
@sazma – Sam Rowe – YaketyStats
@skodai – Scott Kodai – former Vista Admin now manager
@tehmot – George Hernandez – CE/Vista DBA
@ucblackboard – UC Blackboard Admins

Faculty

@academicdave – David Parry – Emerging Media and Communications
@amberhutchins – Amber Hutchins – PR and Persuasion
@barbaranixon – Barbara Nixon – Public Relations
@captain_primate – Ethan Watrall – Cultural Heritage Informatics
@doctorandree – Andree Rose – English
@KarenRussell – Karen Russell – Public Relations
@mwesch – Mike Wesch – Anthropology
@prof_chuck – Chuck Robertson – Psychology

Information Technologist / Support

@aaronleonard – Aaron Leonard
@Autumm – Autumm Caines
@bwatwood – Britt Watwood
@cscribner – Craig Scribner
@dontodd – Todd Slater
@ECU_Bb_Info – Matt Long
@ekunnen – Eric Kunnen
@heza – Heather Dowd
@hgeorge – Heather George
@masim – ???
@mattlingard – Matt Lingard
@meeganlillis – Meegan Lillis
@soul4real – Coop

Assessment / Library / Research

@alwright1 – Andrea Wright – Librarian
@amylibrarian – Amy Springer – Librarian
@amywatts – Amy Watts – Librarian
@elwhite – Elizabeth White – Librarian
@kimberlyarnold – Kimberly Arnold – Educational Assessment Specialist
@mbogle – Mike Bogle – Research

Web Design / UI

@eironae – Shelley Keith

Director

@aduckworth – Andy Duckworth
@garay – Ed Garay
@grantpotter – Grant Potter
@IDLAgravette – Ryan Gravette
@Intellagirl – Sarah B. Robbins
@tomgrissom – Tom Grissom


Forcing Weblogic’s Config.xml

Let’s never mind why I am working on this in the first place. Namely…

  1. the Blackboard Learning Environment Connector in Blackboard Vista 8 introduced using the hostname and port for applet URLs, and
  2. Blackboard dropped WebCT’s support for using a different port for an application behind a load balancer.
So we found we could use port 443 as the SSL listen port: because we terminate SSL on the load balancer, Weblogic never actually binds to port 443, yet the Vista application is tricked into displaying to the end user what we wish.
In the past week, we have put the correct config.xml in place multiple times, only to find it reverted to an older version with the port we don’t want. The first time, I was lazy and did not shut down the Weblogic admin server first because… well… that was the lazy practice I had used with Weblogic 8.1 without ever having a problem. My shell history shows the file was correct then. Within hours it wasn’t correct anymore.
So, we found a few things…
  1. a copy of the config.xml is stored in WEBCTDOMAIN/servers/domain_bak/config_prev/,
  2. all files in WEBCTDOMAIN/config/ are pushed to the nodes,
  3. to change this value in the Weblogic console requires turning on a feature to bind to the SSL listen port.
Additionally, we suspect Weblogic keeps this information in memory and writes its own changes back to the file on the admin node (destroying our change). The managed nodes then pick up the reverted file.
The latest shot at this is to purge #1 and #2 on both the admin server and the managed nodes, put the right file in place on the admin node, and see if it reverts again.
So now I’ve got to write a script to periodically check whether the nodes have the wrong listen port and email us should it change.
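A minimal sketch of what I have in mind; the config.xml path, the expected port, and the email address are all hypothetical stand-ins:

#!/bin/sh
# Mail us if config.xml no longer mentions the SSL listen port we want.
# The path, port, and address are hypothetical; adjust for the install.
CONFIG=/u01/WEBCTDOMAIN/config/config.xml
if ! grep -q '443' "$CONFIG"; then
    echo "config.xml on `hostname` lost port 443" | mail -s "config.xml reverted" admins@example.edu
fi

Run from cron every few minutes, it would nag us the moment Weblogic rewrites the file.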

IMS Import Error When Node Is Down

This is what I got when a node was down while I attempted to do an IMS import in Blackboard CE/Vista.

Failed to upload files, exiting.
Cause could include invalid permission on file/directory,
invalid file/directory or
repository related problems

The keywords permission, file, and directory would have sent me anywhere but the right place. The keyword repository, though, made me suspicious the node had a worse issue than bad permissions. So I looked for the most recent WebCTServer log and found it to be a week old. The last messages in the log confirmed the node had been down for a week.
🙁

Something in the log questioning whether or not the node was running would have saved me lots of time this morning.

I added a couple lines to my .bashrc to provide a visual indicator of how many are running.

JAVA_RUNNING=`ps -ef | grep '[j]ava' | grep -c '[v]ista'`
echo "  -- No. Vista processes running = $JAVA_RUNNING"

Better might even be to have it evaluate whether fewer than one or more than two (or three) are running and, if so, print something obvious that the world is falling. Maybe later. It took me just a couple minutes to write and test what I have. The rest will come after I decide what I really want. 🙂
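Something like this sketch would do it (the one-to-two range is a guess for our setup):

# Complain loudly unless one or two Vista processes are running.
JAVA_RUNNING=`ps -ef | grep '[j]ava' | grep -c '[v]ista'`
if [ "$JAVA_RUNNING" -lt 1 ] || [ "$JAVA_RUNNING" -gt 2 ]; then
    echo "*** THE WORLD IS FALLING: $JAVA_RUNNING Vista processes running ***"
fi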

Also, the node wasn’t running because a coworker had run into a situation where the fifth node would not start. She thought maybe the number of connections Oracle would accept was not high enough. I suggested a simple test: shut down a node and see if the problem one suddenly works. I happened to be working with the one she shut down for the test. As it happens, she had just started a script to bring them back up when I asked.

Stats

Dreamhost collects the access and error logs for the web site domains they host for me. The stats are crunched by Analog. The numbers are okay, though I much prefer Google Analytics. (Even AWStats is better.) Still, Analog is good enough.

While at BbWorld, Nicole asked me about the hits to her wedding web site. She made it sound like she and Ashley had the data and just needed to know how to interpret it. A couple days later, it turned out they didn’t have the data. Instead, they had run into a password issue.

Shell / FTP:

What I had suggested to Nicole was that Ashley could find the stats by going to logs/william-nicole.com. (Actually, it was logs/william-nicole.com/http/html.)

Web:

Since only Ashley’s user can access the stats through the shell / FTP route, I went into my admin panel to add users for Nicole and myself to access the stats. I erroneously assumed the user with access to manage the content (Ashley) would have access to the stats. Instead, Dreamhost only automatically grants the panel user (me) access to stats. Doh! So I ended up creating accounts for them both.

Shameless Plugs:

Nicole’s site is http://william-nicole.com/.

Another site I am hosting for Shel is http://artistictraveler.nu/.

WordPress Error: This file cannot be used on its own.

When I posted a comment to a friend’s WordPress blog, it came up with the error:

Error: This file cannot be used on its own.

I was responding to a comment, so I doubted he had broken his blog between making his comment and my response. So I went looking through my own install. Essentially, at a shell I used

find . -type f -exec grep -l "This file cannot be used on its own." {} \;

to locate the file involved: wp-comments-popup.php. This file contains code which checks that the HTTP_REFERER variable has a specific value equal to the path and file name for the comments page. If this is not the case, then it throws this error. The file mentioned in the error is wp-comments.php.

It seems that I had configured my web browser not to pass the HTTP referrer to web servers, so the check failed and threw this error.

Maybe the WordPress developer who designed this had no idea web browsers can be set not to send a referrer. Searching for the error on the WP site yielded nothing. Judging from the tons of comments about people hitting this error, lots of people turn off sending referrers.

Solution for those leaving comments: If you attempt to leave a comment and see this error, then enable referrers. WordPress actually has a decent article on enabling HTTP referrers for a number of different pieces of software.

Friendlier error for WP blog owners: Edit wp-comments-popup.php. Change

die ('Error: This file cannot be used on its own.');

to

die ('Learn how to <a href="http://codex.wordpress.org/Enable_Sending_Referrers">enable HTTP referrers</a> to fix this.');

Better Way to Count

Our awesome sysadmins have put the user agent into our AWStats, so we are tracking these numbers now. They discovered something I overlooked: Netscape 4.x is used 10 times as much as 7.x or 8.x. Wowsers! Some people really do not give up on the past.

Back in the Netscape is dead post, I used this to count the Netscape 7 hits.

grep Netscape/7 webserver.log* | wc -l

Stupid! Stupid! Stupid! The above requires a separate run for each version of Netscape. This is why I missed Netscape 4.

This is more convoluted, but I think it is a much better approach.

grep Netscape webserver.log* | awk -F'\t' '{print $11}' | sort | uniq -c | sort -n

It looks uglier, but it’s much more elegant. Maybe I ought to make a resolution for 2008 to be elegant in all my shell commands.

This version first pulls any log lines containing Netscape. Next, the awk piece prints only the user agent string (the eleventh tab-delimited field). The first sort puts all the similar entries next to each other so uniq does not count duplicates separately. The -c makes uniq count each run of identical entries. The final sort -n orders the lines by uniq’s count, so the largest ends up at the bottom.

Everything to Everyone

This is intended to be a more thoughtful response to Laura regarding Course Management Systems and the need for innovation.

Currently, Course Management Systems are bloatware. They got this way by trying to provide everything to everyone. One instructor wants a feature, the university presses for the feature, and the CMS programmers put in the feature. Okay, maybe that chain plays out not even half the time, but given that we have about 15,000 instructors, even a tenth getting a tenth of what they want adds up very quickly. Where the requests overlap is where companies feel the pressure to add these features.

In my experience, people have found CE and Vista clunky and difficult to use since 2001ish. Basically, that was when the shiny newness wore off at Valdosta State. If anything, it has gotten worse over time. Personally, I think this is because it is not easy to use. Part of this lack of ease is the sheer number of actions required to accomplish frequent tasks. Another part is the overwhelming number of possible branches one might take [1] in the decision tree. Part of what makes us intelligent is visualizing the goal and taking the steps necessary to get there. When software is not easy to use, the users feel stupid because they cannot figure out how to reach the goal.

Think about the complaints we have been seeing about CE6 from people using CE4. They are griping about the disappearance of features they are used to using. No one wants to lose the features or options they frequently use. They also wish the features or options they never use would disappear.

From what I’ve seen, instructors will make use of what the university provides. When universities don’t provide what instructors want, these instructors will find what they want elsewhere and make use of it. Large companies take a long time to integrate new features. By the time they figure out the user base wants something, incorporate it, release it, and customers implement it, the users have become accustomed to the version they found elsewhere and are not attracted to a feature they’ve been using for years in another tool. So then we invoke FERPA and whatever to move them to the CMS, which is clunkier than what they were already using.

So enough with my griping… What is the solution? Well, maybe we should think about what a Course Management System should do?

  1. Course management: This means it provides the university administration the means to control access to classes. It is not for the faculty so much as for provosts, vice presidents, and registrars to be comfortable the university is not allowing students to take something without paying the institution.
  2. Learning: Specifically, these are communication of concepts and evaluation of concept comprehension.

In a nutshell, #1 is the course list and administration screens while #2 is the course internals. If our focus is recreating the university in an online environment, then the CMS is the right approach. By importing the data from the student information system, we build a hierarchy just like the course catalog and put students into virtual representations of these classes. This mindset is where instructors want to build classes that consist of their lectures, the assignments, and the assessments. It is the face-to-face class online. Thankfully, online classes are moving toward tools that better utilize the advantages of the WWW. However, the focus is more toward improving peer discussion.

Maybe this approach isn’t the best one for learning? Last month I read a few articles on a web site advocating a different model: students gathering and creating information themselves (the Personal Learning Environment). The instructor in this model becomes more of a mentor, as in independent study or how universities functioned at the time of our Founding Fathers. I’ve been hearing for over a decade now that this is the direction education ought to take. However, I think it is unlikely, as it is easier on the instructor to use the bird shot approach. 🙂

My Approach: The CMS is only an integration framework providing access to tools. It doesn’t try to provide these tools at all. There are hundreds of wiki products, each better at some things depending on how it is used. Why should the CMS think it can do it better than all of them? The same applies to blogs, social bookmarking, file sharing, etc. This means universities would provide a number of these tools, support dozens of different applications, and integrate them all. We would have to better understand data flow, security, and how these all work well together pedagogically. It’ll be a nightmare.

[1] One of the things I unfortunately still do is recreate a user’s actions by figuring out what they clicked on in the recorded session. Many of the problems we see are user error, probably through not understanding the ramifications of the action.

Fun With Oracle Environment Variables

This is more or less for the next time I lose all my brain cells from not working over the weekend….

Critical Oracle variables:

  1. ORACLE_HOME
  2. ORACLE_SID

Pretty much any script that deals with Oracle needs the value of these.

Cron doesn’t provide these variables, so set and export them within the script to make them available to the shell. Also, “export VARIABLE=value” doesn’t seem to work (at least in the old Bourne shell), so use “VARIABLE=value; export VARIABLE”.
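For example, a cron-run script might start like this (the values are hypothetical; use your own installation’s):

#!/bin/sh
# Cron provides almost no environment, so set what Oracle needs ourselves.
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_SID=VISTA; export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH; export PATH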

Oh, the joys of shell scripting.
🙂

BbWorld Presentation Redux Part I – Automation

Much of what I might write in these posts about Vista is knowledge accumulated from the efforts of my coworkers.

I’ve decided to do a series of blog posts on our presentation at BbWorld ’07, given on behalf of the Georgia VIEW project: Maintaining Large Vista Installations (2MB PPT). I wrote the bit about tracking files a while back, in large part because of the blank looks we got when I mentioned in our presentation at BbWorld that these files exist. For many unanticipated reasons, these files may never make it into the tracking data in the database.

Automation in this context is essentially the scheduling of tasks to run without a human needing to intercede. Humans should spend their time on analysis, not typing commands into a shell.

Rolling Restarts

This is our internal name for restarting a subset of the nodes in our clusters. The idea is to restart all managed nodes except the JMS node, usually one at a time. Such restarts are conducted for one of two reasons: 1) to have a node pick up a setting, or 2) to have Java discard everything in memory. The latter is why we restart the nodes once weekly.

Like many, I was skeptical of the value of restarting the nodes in the cluster once weekly. That is, until, as part of the Daylight Saving Time patching, we handed our nodes over to our Systems folks (hardware and operating systems) and forgot to re-enable the rolling restarts for one batch. Those nodes started complaining about issues into the second week. Putting the rolling restarts back in place eliminated the issues. So… now I am a believer!

One of my coworkers created a script which 1) detects whether or not Vista is running on the node, 2) shuts down the node only if Vista is running, 3) once it is down, starts the node back up, and 4) finally checks that it is running. It’s pretty basic; see the sketch below.
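A minimal sketch of that logic, with hypothetical start and stop scripts standing in for the real ones:

#!/bin/sh
# Shut down the node only if Vista is running, then start it back up.
RUNNING=`ps -ef | grep '[j]ava' | grep -c '[v]ista'`
if [ "$RUNNING" -gt 0 ]; then
    /u01/WEBCTDOMAIN/bin/stopNode.sh    # hypothetical stop script
fi
/u01/WEBCTDOMAIN/bin/startNode.sh       # hypothetical start script
sleep 120                               # give the JVM time to come up
ps -ef | grep '[j]ava' | grep '[v]ista' > /dev/null || \
    echo "node `hostname` failed to start" | mail -s "restart failed" admins@example.edu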

Log cleanup to preserve space

We operate on a relatively small space budget, and accumulating logs ad infinitum strikes us as unnecessary. So we keep a month’s worth of certain logs. Others are rolled by Log4j, which keeps a set number. Certain activities can mean only a day’s worth are kept, so we have on occasion increased the number kept for diagnostics. Log4j is so easy and painless.

We use Unix’s find with mtime to look for files more than 30 days old with specific file names and delete the ones which match the pattern.
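For example, a minimal version of the cleanup (the path and file pattern are hypothetical):

# Delete matching logs not modified in the last 30 days.
find /u01/WEBCTDOMAIN/logs -name 'WebCTServer.*.log' -mtime +30 -exec rm -f {} \;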

UPDATE 2007-SEP-18: The axis files in /var/tmp will go on this list, but we will delete any more than a day old.

Error reporting application, tracking, vulnerabilities

Any problem we have encountered, we expect to encounter again at some point. We send ourselves reports to stay on top of potentially escalating issues. Specifically, we monitor for the unmarshalled exception from WebLogic and for tracking files that failed to upload, and we used to collect instances of a known vulnerability in Vista. Now that it has been patched, we are not looking for it anymore.

Thread dumps

Blackboard at some point will ask for thread dumps from the time an error occurred. Replicating a severe issue strikes us as bad for our users. Instead, we have thread dumps running every 5 minutes and can collect them to provide to Blackboard on demand. No messing with the users for us.
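The dumps themselves are easy to take: sending a JVM a SIGQUIT (kill -3) makes it print a thread dump to its stdout log. A sketch of the cron job, assuming the same ps pattern as earlier:

# Run from cron every 5 minutes; each Vista JVM writes a dump to its log.
for PID in `ps -ef | grep '[j]ava' | grep '[v]ista' | awk '{print $2}'`; do
    kill -3 $PID
done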

Sync admin node with backup

We use rsync to keep a spare admin node in sync with the admin node for each production cluster. Should the admin node fail, we have a hot spare.
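The core of it is a one-liner along these lines (the paths and spare host name are hypothetical):

# -a preserves ownership and times; --delete keeps the spare an exact mirror.
rsync -az --delete /u01/WEBCTDOMAIN/ spare-admin:/u01/WEBCTDOMAIN/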

LDIS batch integration

Because we do not run a single cluster per school, and the Luminis Data Integration Suite does not work with multiple schools on Vista 3 (rumor is Utah has it working for Vista 4), we have to import our Banner data in batches. The schools we host send the files; our expert reviews them and puts them in place. A script finds the files and uploads each in turn (a sketch is below). Our expert can sleep at night.
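The loop itself need not be fancy. A sketch, with a hypothetical drop directory and a placeholder where the actual Vista import command goes:

#!/bin/sh
# Upload each Banner extract the schools dropped off, one at a time.
for FILE in /u01/imports/*.xml; do
    [ -f "$FILE" ] || continue
    # the actual Vista import invocation goes here
    echo "imported $FILE" >> /u01/imports/import.log
done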

Very soon, we will automate the running of the table analysis.

Anyone have ideas on what we should automate?