Thursday, January 10, 2013
Cornell extends this further by adding their own changes to VIVO as a third level. By modifying the build.xml file and deploy.properties to point at VIVO as a second layer, the build script can perform the same changes to VIVO that VIVO does to Vitro.
It seems a little complicated, but it keeps you out of the VIVO source, allowing you to replace your VIVO target with the latest version and adapt more rapidly to new releases of VIVO. There hasn't been a study yet, but I would guess that the average time from the release of a new VIVO version to an institution upgrading to that version is around five months. I know at UF it took us about that long (we often waited for the .1 release), and at CU they were still on 1.2 when I arrived (1.5 was released just before the VIVO conference in August).
I've started a wiki article on the new VIVO Confluence wiki, https://wiki.duraspace.org/display/VIVO/Building+VIVO+in+3+tiers, that describes how to set up your local VIVO to run this three-tier system.
Tuesday, January 8, 2013
Updating Data Through Data Ingest in VIVO 1.5.1
I've had a lot of tasks at CU since starting, and I'll go over some of the things I've learned and written soon enough. For now I wanted to talk about updating your data in VIVO 1.5.1. In the old days, when four programmers at UF embarked on data ingest, if you wanted to get data into VIVO you had to add it to the system's "Main Model," known as KB2. This made data ingest difficult during the update phase, because you either had to:
- start over from scratch with a blank VIVO, or
- remove the previously ingested data that contained the data you wanted to change
Semantic triple stores didn't have a key that we could use to link a row in KB2 to the data coming in from our source (in hindsight, I believe there are ways with hash keys that we probably should have used). Because of this, we constructed a very complicated (and time-expensive) process to compare the data you are putting into VIVO against the last ingest from that source. It creates an additions file and a subtractions file, which you then apply against the KB2 model. Basically, it was a bit like writing a delete and an insert to accomplish an update in SQL.
Now in VIVO 1.5.1 this is mostly the same. However, data no longer has to be in the main model to be indexed. So now we can separate our data by source, or in CU's case by the tables that generated the data. This allows for a shorter ingest process: we only need to drop and re-add the models that have changes. I took CU's current process (which uses Selenium scripts against the UI), ripped out the portion that loaded data into KB2 (downloading the data, then using the add/remove screen to load the exported data into the main graph), and added a method to drop the import graph we used instead. This cut an entire ingest from 3-4 hours down to 1 hour.
We were still rebuilding from scratch with each ingest, and now that we're heading to production I wanted to make this process a little faster. So I wrote the first of a couple of scripts toward automating the entire process. This first script reviews the .dat files for changes, allowing me to drop only the graphs that have changes and re-run their ingest scripts.
The process was fairly simple, and I've included a couple of sites and blogs I used to figure out what to do. By hand (which will become another script soon) I copied down the data from the previous run, ran a new export, and copied down the new data. I then pass my new little script the two paths: one for the old data and one for the new.
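The hand-off of those two paths is just command-line arguments. A hypothetical entry point (the script name and the normalize helper are mine, not from the post) might look like:

```python
# Hypothetical entry point: pass the old and new export folders, e.g.
#   python review_changes.py uccs-old-data/ uccs-new-data/
# normalize() is my own helper: the folder-review code concatenates
# path + file name directly, so the path must end with a separator.
import os
import sys

def normalize(path):
    return path if path.endswith(os.sep) else path + os.sep

if __name__ == "__main__" and len(sys.argv) == 3:
    oldFilePath = normalize(sys.argv[1])
    newFilePath = normalize(sys.argv[2])
    reviewFolderForChanges(oldFilePath, newFilePath)
```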
The first step in the script is to review all of the file names to see what is missing and what's new. Since the export process is an SQL script that always uses the same file names for each of its methods, we don't have to worry about misspellings, just new .dat files or the loss of a .dat file.
The next and final step in the process was to review the files themselves. I could have used Python's difflib, but I wanted more information from the files: I wanted to know what was new and what had been removed. Plus, I found a nice little reference post by Frankie Bagnardi that I wanted to implement myself. The result has greatly increased the ability of the ingest operator (me today, probably Alex and Vance at other times) to make sure that the changes coming in are reflected in VIVO. For example:

```python
import os

def reviewFolderForChanges(oldFilePath, newFilePath):
    # read in the file listings for both folders
    oldFiles = os.listdir(oldFilePath)
    newFiles = os.listdir(newFilePath)
    # diff files that exist in both folders, and report
    # files that are new or have gone missing
    for newFile in newFiles:
        if newFile in oldFiles:
            reviewFileForChanges(oldFilePath + newFile, newFilePath + newFile)
        else:
            print "New File Found: " + newFile
    for oldFile in oldFiles:
        if oldFile not in newFiles:
            print "File Missing: " + oldFile
```
Changes in file:/Users/stwi5210/Source/uccs-new-data/fis_faculty_member_positions.dat
With information like this, I go to the two individuals listed and make sure they no longer have the positions of Chair or Lecturer. This lets me know that my ingest was successful and that it's ready to migrate to production.
All in all, the script took about an hour to write and run, and it saved me about 40 minutes of ingesting. Plus, I was able to review VIVO after the ingests finished for the data that should have changed, which is a big improvement over our previous method of review.
- Compare Two Files with Python by Frankie Bagnardi - http://aboutscript.com/blog/posts/107
- Python: iterate (and read) all files in a directory (folder) by Bogdan T - http://bogdan.org.ua/2007/08/12/python-iterate-and-read-all-files-in-a-directory-folder.html