Preserving CE/Vista Settings

I’ve been asked for notes about this a few times. So here’s a blog post instead.
🙂

A coworker is working on scripting our updates. We lost the Luminis Message Adapter settings in applying the patch to the environment we provide to our clients. Fortunately, those settings are maintained by us, not our clients, so I pushed them back very easily. Unfortunately, it points to the need to capture the settings so they can be restored later.

In Oracle databases, this is pretty easy. As the schema user, run the following. It does a few intentional things. First, we have multiple institutions, so the breaks make identifying each institution easier. Second, the same label on multiple forms gets confusing, so I sort by setting description id on the theory that these ids are generated at the time the page is created, so settings from the same tool will float together. (The last modified time stamp is probably unnecessary; I used it in an earlier version and left it just in case Vista, for whatever reason, added a new setting for the same label instead of modifying the existing one.) Spool this both before and after the upgrade (a spool wrapper sketch follows the query), then use diff or WinMerge to compare the versions. Anything present in the before version but missing afterward should be evaluated for adding back to the settings.

col lc_name format a50
col setting_value format a80
col label format a80
col lock format 999
col child format 999

clear breaks computes
break on lc_name skip 1

select learning_context.name lc_name, settings_description.label, settings.setting_value,
settings.locked_flag "lock", settings_description.inheritable_flag "child"
from learning_context, settings, settings_description
where settings.settings_desc_id = settings_description.id
and settings.learning_context_id = learning_context.id
and learning_context.type_code in ('Server','Domain','Institution','Campus','Group')
order by learning_context.name, settings.settings_desc_id
/
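A minimal sketch of that spooling, assuming the query above is saved as settings_report.sql (a file name I made up); run it once before and once after the upgrade, then compare the two files:

set pagesize 200 linesize 250 trimspool on
spool settings_before.lst
-- settings_report.sql holds the query above
@settings_report.sql
spool off
-- after the upgrade: spool settings_after.lst, rerun the query, spool off, then
-- diff settings_before.lst settings_after.lst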

An example of the multiple forms issue is external authentication. CE/Vista provides an LDAP (A) and an LDAP (B). The settings_description.label for both is contextmgt.settings.ldap.source. The settings_description.name for both is source. It looks like each of the two identical labels has a different settings.settings_desc_id value depending on whether it is A or B. To me it seems lame to use the same label for two different ids.

The parts of the application most vulnerable to losing settings during an update are the System Integration settings. A mismatched Jar on a node will wipe all the settings associated with that Jar.

However, I can see using this to capture the settings as a backup just in case an administrator or instructor wipes out settings by mistake. Yes, this is scope creep. Create a backup of the settings table to actually preserve the settings.

create table settings_backup_pre_sp2hf1 tablespace WEBCT_DATA as select * from settings;
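With that backup in place, one quick way to see what an update wiped is to compare the backup against the live table. A minimal sketch, assuming setting_value is a plain character column (a LOB would need a different comparison):

-- rows present in the backup but missing or changed in the live settings table
select id, learning_context_id, settings_desc_id, setting_value
from settings_backup_pre_sp2hf1
minus
select id, learning_context_id, settings_desc_id, setting_value
from settings
order by learning_context_id, settings_desc_id
/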

Contexts: As a server admin, I maintain certain settings and push those down. Each client has control over some other settings and may push those down from the institution context. Maybe some are creating division and group admins? Maybe some instructors are changing things at the course or section levels. I may end up capturing everything?

Restoration: The whole purpose of preserving the settings is to restore them later. There are a couple methods in theory:

  1. Providing the settings to a human to re-enter. The labelling issue makes me question the sanity of trying to explain this to someone.
  2. Updating the database directly would just need settings.id to ensure it is the right row. Maybe dump out the settings in the format of an update command with a label on each to explain the context (a sketch of that is below)? Ugh.
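A hedged sketch of that second option: generate the update commands from the backup table, with the label from settings_description as a comment on each. The spool file name is made up, null values come out as empty strings, and very long values may need a bigger linesize:

set pagesize 0 linesize 300 trimspool on feedback off
spool restore_settings.sql
-- one commented update statement per backed-up setting
select '-- ' || sd.label || chr(10) ||
       'update settings set setting_value = ''' ||
       replace(b.setting_value, '''', '''''') ||
       ''' where id = ' || b.id || ';'
from settings_backup_pre_sp2hf1 b, settings_description sd
where b.settings_desc_id = sd.id
/
spool off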

If settings were not so easily lost, then this would be so much easier.

View: Another place to look is the settings_v view. (Redundant?) The only reason I don’t like this view is that it reports the values for every learning context, which makes reporting off it take much, much longer. For example, the encryption key for a powerlink is listed 8 places in settings/settings_description and 18,769 places in settings_v.

The DVR Trap

Recorded an episode of Psych because I know people who like it. It’s okay, but I probably won’t make a season pass for it.

Skipping past the commercials, I recognized the characters, so I stopped, only to find myself watching a commercial featuring the show’s characters.

That is SO wrong. Smart way to catch those of us skipping past the advertisements. Guess I’ll just have to get better at skipping with the TiVo.

Course Management Systems are Dead!

Heh. Blackboard Vista is headed for a brick wall? Who knew?

7. Course Management Systems are Dead! Long Live Course Management Systems! Proprietary course management systems are heading for a brick wall. The combination of economic pressures combined with saturated markets and the maturing stage of the life cycle of these once innovative platforms means that 2009 may well be the year of change or a year of serious planning for change. Relatively inexpensive and feature-comparable open source alternatives combined with some now learned experience in the process of transition from closed to open systems for the inventory of repeating courses makes real change in this once bedrock of education technology a growing possibility. As product managers and management view these trend lines, I think we might see incumbent players make a valiant effort to re-invent themselves before the market drops out from underneath them. Look for the number of major campuses moving (or making serious threats to move) from closed systems to open ones to climb in the year ahead. The Year Ahead in Higher Ed Technology

It is true the big player in proprietary CMS / LMS / VLE software has lagged in innovation for quite a while. Remember, though, Blackboard bought WebCT and kept the other product around while hemorrhaging former WebCT employees. That alone kept them extremely busy just trying not to lose every customer they bought. The next version, Blackboard 9, should be available soon. That is the litmus test for their future success.

Bb9 is a newer version of Academic Suite, aka Classic. There is no direct upgrade path from CE / Vista to Bb9. There is a Co-Production upgrade path where one can run both versions side-by-side with a portal interface to access either version without having to log in again. Content still has to be extracted from the old and placed in the new. (Since we are running Vista 3 and Vista 8 side-by-side now, this doesn’t give me warm fuzzies.) This was the upgrade path some WebCT and Blackboard clients took getting from Vista 3 to 4, only to find Vista 4 was junkware. Similarly, those leaving CE4 for CE6 were frustrated by the move. So, I would predict:

  1. Those on Classic 8 now will go to Blackboard 9 ASAP.
  2. Smaller colleges on CE 8 who through turnover no longer have the people burned by the CE4->CE6 migration will probably move to Blackboard 9 this summer prior to Fall.
  3. Smaller colleges on CE 8 who still remember will migrate after AP1 (maybe a year after Bb9 release).
  4. Larger colleges on CE or Vista 8 will move some time between AP1 and AP2.
  5. Consortia groups like GeorgiaVIEW, Utah State System, or Connecticut State University System will wait and see.

That last group doesn’t take change easily. They have the nimbleness of a Supertanker cargo ship.

I am still waiting for the tweets about Moodle and Sakai, the open source alternatives, to change from, in general, “X sucks, but at least it’s not Blackboard” to “X is the best there is.” If “at least it’s not Blackboard” is the only thing going for the software, then people will stay where they are to see where things go. There need to be compelling reasons to change.

Unfortunately, the cries of the students and the faculty in the minority are not enough. Most people are happy enough. They can accomplish the important things. They get frustrated when IT takes the system down, or at data center power issues, network issues, or a performance issue. None of which go away by picking FOSS.

reCAPTCHA and Chrome

Was using this RSVP form with Google Chrome and found the reCAPTCHA was telling me I repeatedly failed the Turing test. After the sixth time, I decided it might be my browser, so I tried it in Firefox which worked fine.

Curious, I went looking for a possible problem between reCAPTCHA and Chrome. According to a post there, the Transitional XHTML DOCTYPE is the cause. Changing that DOCTYPE to Strict ought to fix the issue. Given the audience, I doubt there is anyone else using Chrome to fill it. So fixing it probably isn’t worth it to them.

Interesting. I’ll have to look into issues with Chrome and the XHTML Transitional DOCTYPE.

Labels

This started out as a comment to Adrian, but it got so long it may as well be a post on its own….

The significance of racial labels is not in identifying the genetic makeup of individuals. The significance is in how the labels were used to enforce segregation long before the American Revolution. Before slaves in the United States were freed in 1865, defining who was Black was to identify who was eligible to be held in slavery and have ownership of property. There were grave concerns about mixing owners and slaves resulting in slaves gaining their freedom, especially once capturing them from Africa was no longer allowed. Defining race was about control then. Even in the more than one hundred years after the slaves were freed, defining who was Black was about control. Instead of who could be forced into slavery, the definitions of who is Black identified who could be excluded from power.  The fear was mixed people using the laws to somehow get access to power. Only since Affirmative Action has it become in any way beneficial for others to have less than pure European descent.

Adrian remarked many of us have ancestors who keep us from being purely from one group or another. Chatting with George and Lorenia yesterday, George pointed out that even in Europe, southern Spain and Italy confound the stereotype. Our increasing understanding of genetics and culture invalidates race as a useful means of describing individuals. Individuals have genetic markers linking them all over the globe. We are one species. My favorite example is a PBS show indicating the women described as Amazons moved to western Mongolia.

“The earth is but one country and mankind its citizens.” – Baha’u’llah

Finding Sessions

Clusters can make finding where a user was working a clusterf***. Users end up on a node, but they don’t know which node. Heck, we are ahead of the curve if we get a user name, date, and time. Usually checking all the nodes for the past few days can net you the sessions. Capturing the session ids in the web server logs usually leads to finding an error in the webct logs. Though not always. Digging through the web server logs to find where the user was doing something similar to the appropriate activity consumes days.

Blackboard Vista captures node information for where the action took place. Reports against the tracking data are more concise, more easily understood, and more quickly compiled. They are fantastic for getting a clear understanding of what steps a user took.

Web server logs contain every hit, which includes every page view (well, almost; the gap is another post). Tracking data represents at best 25% of the page views. This gap is perhaps the only reason I favor logs over tracking data. More cryptic data usually means a slower resolution time, not a faster one.

Another issue with tracking is the scope. When profiling student behavior, it is great. The problem is only okay data can be located for instructors, while designers and administrators are almost totally under the radar. With the new outer join, what we can get for these less visible roles has been greatly expanded (a rough sketch of the idea is below).
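To be clear about what I mean by the outer join, here is a rough sketch; the table and column names (section_members, tracking_events, user_id, and so on) are stand-ins I made up, not the real Vista schema. The point is that the outer join keeps designers and administrators in the report even when they have no tracking rows at all:

-- hypothetical tables: section_members (who is enrolled) and tracking_events (what they did)
select m.user_name, m.role, count(t.id) as events
from section_members m
left outer join tracking_events t
  on t.user_id = m.user_id
group by m.user_name, m.role
order by events
/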

Certainly, I try not to rely too much on a single source of data. Even I sometimes forget to do so.

Turing Digitalization

Some 60 million CAPTCHAs are solved daily, according to Luis von Ahn (on Wired Science on PBS). His technology project, reCAPTCHA, will use the words OCR could not recognize while digitizing books as these challenges, so those unknown words get solved in a quasi-automated sort of way.

I wonder, though. Even if reCAPTCHA a) becomes the default at major sites like Yahoo or Google and b) is solved 100% right every time, then how many books would be completed per day? Certainly no one really comments on this blog, so it’s almost why bother. (hint, hint)

UPDATE: Trying to clarify. reCAPTCHA integrates two technologies.

Optical Character Recognition always has questionable results. The worse the quality of the text (age or damage), the less capable the software. It takes a human on average about 10 seconds to recognize and provide the correct spelling of a piece of unknown text.

CAPTCHAs are the little pictures used to verify you are a human and not a spammer at various web sites. The problem is coming up with good digital letters OCR software cannot easily recognize.

Luis’ reCAPTCHA idea is that if OCR software has trouble with a piece of text from these scanned books, then it would make an excellent candidate for confusing the spammer bots trying to defeat CAPTCHAs. At the same time, humans validate the correctness of the unknown words where the OCR was confused.

Better?

Coradiant TrueSight

Several of us saw a demo of Coradiant Truesight yesterday (first mentioned in the BbWorld Monitoring post). I spent most of the demo trying to place the name Jeff Goldblum, as one of the team giving the demo had the voice and mannerisms of the actor’s characters. Had he mentioned a butterfly, then I definitely would have clapped. The other reminded me of John Hodgman.

Something I had not noticed at the time, but a recurring point of having Truesight is to tell our users, “Here is evidence the problem is on your end and not ours.” This assumes the users are rational or will even believe the evidence. They wish the problem had never occurred (their preference) and, secondarily, want a resolution. Preventing every problem, especially issues outside our domain, probably is outside the scope of the budget we receive. So, we are left with resolving the issues. Especially scary are the users who take evidence the problem is on their end or their ISP’s end to mean, “This is all your fault.”

Resolutions we can offer are:

  1. Hardware change – We can replace or alter the configuration of the hardware components of the network, storage, database, or application.
  2. Software change – We can alter the configuration of the software components of the network, storage, database, or application.
  3. Request a code change from a vendor – We can work with our vendors to get a code change. These take forever to implement.
  4. Suggest a user resolve the issue
    1. We can provide a work around (grudgingly accepted, remember the preferred wish is the problem never occurred).
    2. We suggest configuration changes the user can make to resolve the problem.

Truesight provides us information to help us try to resolve issues. Describing the information provided as “facts” was a nice touch. At Valdosta State, I gave up on users reporting their browsers accurately and captured the information from the User-Agent header. Similarly, at the USG, I’ve found users disagree with the User-Agent string about the browser version ~30% of the time. Heck, they have errors in the name of the class ~40% of the time. My favorite is a report that something took 15 minutes when all I could find was something that took four. Ugh. Because Truesight is capturing the header info, it ought to be much easier to confirm what users were doing and where problems occurred more accurately than the users can describe.

After receiving all the “facts”, we still have to determine the cause. Truesight helps us understand the scope of the problem: how many users, how many web servers, and how many pages are affected by slowness, and to what degree. As a DBA and administrator, my job of identifying the cause ought to be easier, though how much easier is probably difficult to quantify.

Part of why (mostly speculation): Problems identified as a spike in anything other than “Host” are external causes, in front of the device. Causes behind the device show up as “Host”. If these were more narrowly broken down, then maybe we could better determine the cause. That would require knowledge web browsers typically would not have, like server processing time, query processing time, or even the health of the servers.

tag: Blackboard Inc, Coradiant, user agent

False Panacea

I ran across Jon Udell’s post on The once and future university which pointed to Mike Caulfield’s post with the video (Transcript).

Technology, I think, is a false panacea. The role of information technology is to better aggregate information for whatever it is we do. Such aggregation draws disparate sources together, but the sources fail to fit together well, which makes working with them more challenging. True, higher education in general lags behind by years, but there are individuals taking these new technologies and applying them to teaching. Not every technology helps students learn just by using it. A DVD player, for instance, requires an educator to determine when to use it: what materials are applicable to the class, which students need to see it, are the students ready to comprehend the content, etc. It’s not, “Oh, there is a DVD player in the classroom, so let’s play anything.”

You might be thinking I am a Luddite. These kids were only online 3.5 hours a day. I am online 8+ hours a day, including weekends! We like technology because it can be very useful. The students write thousands of emails a year. Great! Now, what did they learn out of those emails? I’ve taken an email-based class, and boy was I confused by the end. Of all the classes I still refer back to, that class is never one of them. Of course, I can say the same of many email discussions I am involved in to this day.

There is no single piece of technology from which everyone will get 100% information comprehension in every use. Some people find the same piece intuitive while others become bogged down by frustration at the lack of usability. I suspect part of this is in how people learn. I learned a long time ago there were people I could email a set of directions describing what to do and they could do it. Others might need screen shots. Others might need someone over the phone or face-to-face speaking words about what to do. Some required doing it right that instant so the motor action of each click would become ingrained. So many disparate ways to comprehend create a need for the same information to exist in many different forms.

The teaching assistant or professor lecturing on a topic adequately meets the needs of some students. It’s been ironic to me that educators and Educational Psychologists have been studying this for years and implementing fantastic solutions in K-12 classrooms, yet in universities these solutions barely gain traction. I have faith they will. Technical schools, private colleges, and professional education institutes make use of the solutions. Retention has become an important measure of university success. Universities have responded by attempting to fix everything but the ways content is learned. As students fail out of the universities and find success with these higher education alternatives, the students the universities failed will have children whom they encourage to find an alternative.