
CubicWeb Blog

News about the framework and its uses.

  • Follow up of IRI conference about Museums and the Web #museoweb

    2012/04/12 by Arthur Lutz

    I attended the conference organised by IRI, part of a series of conferences about "Muséologie, muséographie et nouvelles formes d’adresse au public" (hashtag #museoweb). This particular occurrence was about "Le Web devient audiovisuel" (the web is also audio and video content). Here are a few notes and links we gathered. The event was organised by Alexandre Monnin @aamonnz.

    http://polemictweet.com/2011-2012-museo-audiovisuel/images/slide4_museo_fr.png

    Yves Raimond from the BBC

    Yves Raimond @moustaki made a presentation about his work at the BBC around semantic web technologies and speech recognition over large quantities of digitized archives. Parts of the BBC web sites use semantic web data as the database and do mashups with external sources of data (musicbrainz, dbpedia, wikipedia). For example, Tom Waits has an HTML web page: http://www.bbc.co.uk/music/artists/c3aeb863-7b26-4388-94e8-5a240f2be21b ; add .rdf at the end of the URL to get the data: http://www.bbc.co.uk/music/artists/c3aeb863-7b26-4388-94e8-5a240f2be21b.rdf

    He also made an introduction about the ABC-IP The Automatic Broadcast Content Interlinking Project and the Kiwi-API project that uses CMU Sphinx on Amazon Web Services to process large quantities of archives. A screenshot of Kiwi-API is shown on the BBC R&D blog. The code should be open sourced soon and should appear on the BBC R&D github page.

    Following his presentation, someone asked whether using Wikipedia content on an institutional web site would be possible in France. I pointed to the use of Wikipedia on http://data.bnf.fr , for example at the bottom of the Victor Hugo page.

    Raphaël Troncy about Media Fragments

    Raphaël Troncy @rtroncy made a presentation about "Media Fragments", which will enable sharing parts of a video on the web. It has two major features: the sharing of specific extracts and the optimization of bandwidth use when streaming an extract (useful for mobile devices, for example). It is a W3C working draft: http://www.w3.org/TR/media-frags-reqs/. Here are a few links to demos and players:

    Part of the presentation was about the ACAV project done jointly with Dailymotion : http://www.capdigital.com/projet-acav/

    The slides of his presentation are available here: http://www.slideshare.net/troncy/addressing-and-annotating-multimedia-fragments

    IRI presentation

    Vincent Puig @vincentpuig and Raphaël Velt @raphv made a presentation of various projects led by IRI:

    http://www.iri.centrepompidou.fr/wp-content/themes/IRI-Theme/images/logo-iri-petit_fr_fr.png

    Final words

    The technologies seen during this conference are often related to semantic web technologies or at least web standards. Some of the visualizations are quite impressive and could mean new uses of the Web and an inspiration for CubicWeb projects.

    A few of the people present at the conference will be attending or presenting talks at SemWeb.Pro, which will take place in Paris on the 2nd and 3rd of May 2012.


  • CubicWeb Sprint report for the "BugSquash" team

    2012/03/16 by Nicolas Chauvat

    Beginners fixed core bugs

    The first day of the CubicWeb sprint was dedicated to an introduction for a group of four beginners, including two people who do not work at Logilab. At the end of the day, this team knew about Entity, Views and Schema and was ready to dive into the core in order to squash some bugs.

    The first steps into the CubicWeb core were not so easy, but these brave beginners, assisted by a skilled developer, managed to fix some bugs and add a few useful features, including one from a Windows user that made it into the stable branch.

    The gen-static-datadir command

    We had a look at cubicweb-ctl gen-static-datadir, a feature that copies into a directory all the files that could be cached by a "front" web server instead of being served by CubicWeb.

    Testing the feature

    On the first run, we found that not all files were copied, but we were unable to reproduce the problem, so we need to keep an eye on it. In the following tests, we tried several configurations. The files that were copied were always the ones contained in the "deepest" cube in the tree of cubes. So we can say that the command is working well.

    Approach used by the feature

    In the code, we browse all cubes used by the master cube to gather the filenames we want to copy, then use config.locate_resource(resource) to find the best location for each file.

    Doing this, we sometimes copy a file from the cache. If we do not want to use the cache, we could instead sort the cubes, recursively copy the whole data folder of each one, and overwrite files with the versions located nearer to the master cube.
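    The lookup logic can be sketched as follows. The helper below is an illustrative stand-in for config.locate_resource, not the actual CubicWeb implementation:

```python
import os
import tempfile

def locate_resource(resource, cube_data_dirs):
    """Return the first directory, in priority order, holding `resource`.

    Mimics (very roughly) what config.locate_resource does: directories are
    ordered from the master cube down to the deepest dependency, so a file
    redefined near the master cube shadows the copy shipped by a deeper cube.
    """
    for directory in cube_data_dirs:
        if os.path.exists(os.path.join(directory, resource)):
            return directory
    return None

# fake 'data' directories for a master cube and a deeper dependency
master = tempfile.mkdtemp()
deep = tempfile.mkdtemp()
for directory, names in [(master, ['cubicweb.css']),
                         (deep, ['cubicweb.css', 'deep.js'])]:
    for name in names:
        open(os.path.join(directory, name), 'w').close()

best = locate_resource('cubicweb.css', [master, deep])  # the master cube wins
```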

    New option

    We added a -r option that erases the target directory before launching the command.


  • Undoing changes in CubicWeb

    2012/02/29 by Anthony Truchet

    Many desktop applications let the user undo recent changes: a similar undo feature has now been integrated into the CubicWeb framework.

    Because a semantic web application and a desktop application are not the same thing at all, especially as far as undoing is concerned, we will first describe what the undo feature is for now.

    What's undoing in a CubicWeb application

    A CubicWeb application acts upon an Entity-Relationship model, described by a schema. This ensures some data integrity properties. It also implies that changes are grouped into transactions: to preserve data integrity, a transaction is either applied completely or not at all. What may appear as a simple atomic action to a user can actually consist of several actions for the framework. The end-user has no need to know the details of all the actions in those transactions: only the so-called public actions will appear in the description of an undoable transaction.

    Let's take a simple example: posting a "comment" for a blog entry will create the entity itself and the link to the blog entry.

    The undo feature for CubicWeb end-users

    For now there are two ways to access the undo feature once it has been activated in the instance configuration file with the option undo-support=yes. Immediately after having done something, the undo link appears in the "creation" message.

    Screenshot of the undo link in the message

    Otherwise, one can access the undo-history view at any time from the start-up page.

    Screenshot of the undo link in the message

    This view shows the transactions, each with its own undo link. Only the transactions the user has permission to see and undo are shown.

    Screenshot of the undo link in the message

    If the user attempts to undo a transaction which can't be undone or whose undoing fails, then a message will explain the situation and no partial undoing will be left behind.

    What's next

    The undo feature is functional but the interface and configuration options are quite limited. One major planned improvement would be to enable the user to filter which transactions or actions they see in the undo-history view. Another critical improvement would be to selectively enable the undo feature on parts of the entity-relationship schema, to avoid storing too much data and to reduce the underlying overhead.

    Feedback on this undo feature for specific CubicWeb applications is welcome. More detailed information regarding the undo feature will be published in the CubicWeb book when the patches make it through the review process.


  • CubicWeb Sprint report for the "ZMQ" team

    2012/02/27 by Julien Cristau

    There has been a growing interest in ZMQ in the past months, due to its ability to efficiently deal with message passing, while being light and robust. We have worked on introducing ZMQ in the CubicWeb framework for various uses:

    • As a replacement/alternative to the Pyro source that is used to connect to distant instances. ZMQ may be used as a lighter and more efficient alternative to Pyro. The main idea here is to use the send_pyobj/recv_pyobj API of PyZMQ (the Python wrapper of ZMQ) to execute methods on the distant Repository in a way that is totally transparent for CubicWeb.
    http://www.cubicweb.org/file/2219158?vid=download
    • As a JSON server. ZMQ could be used to share data between a server and any client making requests through ZMQ: the request is just an RQL string, and the response is the result set formatted as JSON.
    • As the building block for a simple notification (publish/subscribe) system between CubicWeb instances. A component can register its interest in a particular topic, and receive a callback whenever a corresponding message is received. At this point, this mechanism is used in CubicWeb to notify other instances that they should invalidate their caches when an entity is deleted.
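    The publish/subscribe mechanism of the last point can be illustrated independently of ZMQ with a toy topic registry. The names below are illustrative only, not CubicWeb's actual API:

```python
from collections import defaultdict

class NotificationBus(object):
    """Toy publish/subscribe hub mimicking the notification mechanism
    described above (illustrative sketch, not CubicWeb's or ZMQ's API)."""

    def __init__(self):
        self._callbacks = defaultdict(list)

    def subscribe(self, topic, callback):
        # register interest in a topic, e.g. entity deletion
        self._callbacks[topic].append(callback)

    def publish(self, topic, *args):
        # deliver the message to every callback registered for the topic
        for callback in self._callbacks[topic]:
            callback(*args)

# an instance could invalidate its caches when an entity is deleted elsewhere:
bus = NotificationBus()
invalidated = []
bus.subscribe('delete-entity', lambda eid: invalidated.append(eid))
bus.publish('delete-entity', 1234)
```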

  • CubicWeb Sprint report for the "WSGI" team

    2012/02/20 by Pierre-Yves David

    CubicWeb has had WSGI support for several years, but this support was incomplete.

    The WSGI team was in charge of turning WSGI support into a full-featured backend that could replace Twisted in real production scenarios.

    Because we only had first-class support for Twisted, some of the CubicWeb logic related to HTTP handling was implemented on the Twisted side with Twisted concepts. Our first task was to move this logic into CubicWeb itself. The handling of HTTP statuses in our responses was improved in the process.

    Our second task was to focus on the "non-HTTP" part of CubicWeb (because the repository also manages background tasks). The development mode for WSGI is now able to handle and run such tasks. For this purpose we have begun a process that aims to remove server-related code from the repository object.

    We also tested several WSGI middlewares. One of the most promising is FirePython, which integrates Python logging and debugging features with Firebug. The werkzeug debugger seems neat too.

    http://www.cubicweb.org/file/2194267?vid=download

    All these improvements open the road to a simple and efficient multi-process architecture in CubicWeb.


  • CubicWeb Sprint report for the "Benchmarks" team

    2012/02/17 by Arthur Lutz

    One team during the CubicWeb sprint looked at issues around monitoring benchmark values for CubicWeb development. This is a huge task, so we tried to stay focused on a few aspects:

    • production response times (using tools such as smokeping and munin)
    • response times of test executions in continuous integration
    • response times of test instances running in continuous integration

    We looked at using cpu.clock() instead of cpu.time() in the xunit files that report test results, so as to be a bit more independent of the load of the machine (although time spent in subprocesses won't be counted).
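    The idea can be illustrated with today's time module; the clock functions named above presumably correspond to the CPU-time/wall-time pair, sketched here with time.process_time() and time.time() (our illustration, not the sprint's code):

```python
import time

def measure(func):
    """Return (wall_seconds, cpu_seconds) spent in func().

    time.process_time() plays the role of the CPU clock mentioned above
    (time.clock() in the Python 2 of this post); as noted, time spent in
    subprocesses is not counted.
    """
    wall0 = time.time()
    cpu0 = time.process_time()
    func()
    return time.time() - wall0, time.process_time() - cpu0

# sleeping consumes wall-clock time but almost no CPU time, so CPU-based
# measurements are less sensitive to the load of the machine
wall, cpu = measure(lambda: time.sleep(0.05))
```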

    Graphing test times in hudson/jenkins already exists (/job/PROJECT/BUILDID/testReport/history/?) and times can also be graphed by TestClass and by individual test. What is missing so far is a specific dashboard where one could select the significant graphs to look at.

    By the end of the first day we had a "lorem ipsum" test instance created on the fly for each hudson/jenkins build, and a jmeter bench running on it with its results processed by the performance plugin.

    http://www.cubicweb.org/file/2184036?vid=download

    By the end of the second day we had some visualisation of existing data collected by apycot, using the jqplot javascript visualisation library (cubicweb-jqplot):

    http://www.cubicweb.org/file/2184035?vid=download

    By the end of the sprint, we got patches submitted for the following cubes:

    • apycot
    • cubicweb-jqplot
    • the original jqplot library (update: patch accepted a few days later)

    In the last hour of the sprint, since we had a "lorem ipsum" test application running each time the tests went through continuous integration, we hacked up a proof of concept to get automatic screenshots of this temporary test application. So far we get screenshots for Firefox only, but it opens up possibilities for other browsers. Inspiration could be drawn from https://browsershots.org/


  • "Data Fast-food": quick interactive exploratory processing and visualization of complex datasets with CubicWeb

    2012/01/19 by Vincent Michel

    With the emergence of the semantic web in the past few years, and the increasing number of high-quality open data sets (cf. the lod diagram), there is a growing interest in frameworks that make it possible to store/query/process/mine/visualize large data sets.

    We have seen in previous blog posts how CubicWeb may be used as an efficient knowledge management system for various types of data, and how it may be used to perform complex queries. In this post, we will see, using Geonames data, how CubicWeb may perform simple or complex data mining and machine learning procedures on data, using the datamining cube. This cube adds powerful tools to CubicWeb that make it easy to interactively process and visualize datasets.

    At this point, it is not meant to be used on massive datasets, for it is not fully optimized yet. If you try to perform a TF-IDF (term frequency–inverse document frequency) with a hierarchical clustering on the full dbpedia abstracts dataset, be prepared to wait. But it is a promising way to enrich the user experience while playing with different datasets, for quick interactive exploratory datamining (what I've called the "Data fast-food"). This cube is based on the scikit-learn toolbox, which has recently gained huge popularity in the machine learning and Python communities. The release of this cube drastically increases the interest of CubicWeb for data management.

    The Datamining cube

    For a given query, similarly to SQL, CubicWeb returns a result set. This result set may be presented by a view to display a table, a map, a graph, etc. (see the documentation and previous blog posts).

    The datamining cube introduces the possibility to process the result set before presenting it, for example to apply machine learning algorithms to cluster the data.

    The datamining cube is based on two concepts:

    • the concept of processor: basically, a processor transforms a result set into a numpy array, given some criteria defining the mathematical processing and the columns/rows of the result set to be taken into account. The numpy array is a versatile structure that is widely used for numerical computation, and can thus be used efficiently with any kind of datamining algorithm. Note that, in our context of knowledge management, it is more convenient to return a numpy array with additional meta-information, such as indices or labels, the result being stored in what we call a cw-array. Meta-information may be useful for display, but is not compulsory.
    • the concept of array-view: "views" are basic components of CubicWeb, and distinguishing querying from displaying data is key in this framework, so many different views can be applied to a given result set. In the datamining cube, we simply overload the basic view of CubicWeb so that it works with cw-arrays instead of result sets. These array-views are associated with some machine learning or datamining processes. For example, one can apply the k-means (clustering) view to a given cw-array.

    A very important feature is that the processor and the array-view are selected directly through the URL, using the two related parameters arid (for ARray ID) and vid (for View ID, standard in CubicWeb).
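    Concretely, selecting a processor and a view amounts to building a URL carrying the rql, vid and arid parameters. A small sketch, where the base URL and helper function are made up for illustration (only the parameter names come from the cube):

```python
from urllib.parse import urlencode

def datamining_url(base, rql, vid, arid, **extra):
    """Build a URL selecting both a processor (arid) and a view (vid)
    for a given RQL query (hypothetical helper, illustrative only)."""
    params = dict(rql=rql, vid=vid, arid=arid, **extra)
    return base + '?' + urlencode(params)

url = datamining_url('http://mydomain',
                     'Any LO, LA WHERE X latitude LA, X longitude LO',
                     'protovis-hist', 'attr-asfloat')
```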

    http://www.cubicweb.org/file/2154793?vid=download

    Processors

    Here are some examples of basic processors found in the datamining cube:

    • AttributesAsFloatArrayProcessor (arid='attr-asfloat'): This processor turns all Int, BigInt and Float attributes in the result set to floats, and returns the corresponding array. The number of rows is equal to the number of rows in the result set, and the number of columns is equal to the number of convertible attributes in the result set.
    • EntityAsFloatArrayProcessor (arid='entity-asfloat'): This processor performs similarly to the AttributesAsFloatArrayProcessor, but keeps the reference to the entities used to create the numpy-array. Thus, this information could be used for display (map, label, ...).
    • AttributesAsTokenArrayProcessor (arid='attr-astoken'): This processor turns all String attributes in the result set into a numpy array, based on a word n-gram analysis. This may be used to tokenize a set of strings.
    • PivotTableCountArrayProcessor (arid='pivot-table-count'): This processor is used to create a pivot table, with a count function. Other functions, such as sum or product, also exist. This may be used to create some spreadsheet-like views.
    • UndirectedRelationArrayProcessor (arid='undirected-rel'): This processor creates a binary numpy array of dimension (nb_entities, nb_entities) that represents the relations (or co-relations) between entities. This may be used for graph-based visualisation.

    We are also planning to extend the concept of processor to sparse matrices (scipy.sparse), in order to deal with very high-dimensional data.
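    To make the processor concept concrete, here is a pure-Python sketch of what an 'attr-asfloat' processor does: keep only the columns whose values are all numeric, converted to floats. The real processor returns a numpy array plus meta-information; this stand-in uses plain lists:

```python
def attributes_as_float_array(rows):
    """Illustrative sketch of an 'attr-asfloat' processor: from a result
    set (list of rows), keep only the columns whose values can all be read
    as numbers, and return them as rows of floats."""
    if not rows:
        return []
    keep = [i for i in range(len(rows[0]))
            if all(isinstance(row[i], (int, float)) for row in rows)]
    return [[float(row[i]) for i in keep] for row in rows]

# mixed result set: the string column is dropped, numbers become floats
array = attributes_as_float_array([('Paris', 48.85, 2.35),
                                   ('Lyon', 45.76, 4.83)])
```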

    Array Views

    Most of the array views found in the datamining cube are used for simple visualization. We used HTML-based templates and the Protovis Javascript library.

    We will not detail all the views, but rather show some examples. Read the reference documentation for a complete and detailed description.

    Examples on numerical data

    Histogram

    The request:

    Any LO, LA WHERE X latitude LA, NOT X latitude NULL, X longitude LO,  NOT X longitude NULL,
    X country C, NOT X elevation NULL, C name "France"
    

    that may be translated as:

    All couples (latitude, longitude) of the locations in France, with an elevation not null
    

    and, using vid=protovis-hist and arid=attr-asfloat

    http://www.cubicweb.org/file/2154795?vid=download

    Scatter plot

    Using the notion of view, we can display differently the same result set, for example using a scatter plot (vid=protovis-scatterplot).

    http://www.cubicweb.org/file/2156233?vid=download

    Another example with the request:

    Any P, E WHERE X is Location, X elevation E, X elevation >1, X population P,
    X population >10, X country CO, CO name "France"
    

    that may be translated as:

    All couples (population, elevation) of locations in France,
    with a population higher than 10 (inhabitants), and an elevation higher than 1 (meter)
    

    and, using the same vid (vid=protovis-scatterplot) and the same arid (arid=attr-asfloat)

    http://www.cubicweb.org/file/2154802?vid=download

    If a third column is given in the result set (and thus in the numpy array), it will be encoded in the size/color of each dot of the scatter plot. For example with the request:

    Any LO, LA, E WHERE X latitude LA, NOT X latitude NULL, X longitude LO,  NOT X longitude NULL,
    X country C, NOT X elevation NULL, X elevation E, C name "France"
    

    that may be translated as:

    All tuples (latitude, longitude, elevation) of the locations in France, with an elevation not null
    

    and, using the same vid (vid=protovis-scatterplot) and the same arid (arid=attr-asfloat), we can visualize the elevation on a map, encoded in size/color

    http://www.cubicweb.org/file/2154805?vid=download

    Another example with the request:

    Any LO, LA LIMIT 50000 WHERE X is Location, X population  >1000, X latitude LA, X longitude LO,
    X country CO, CO name "France"
    

    that may be translated as:

    All couples (latitude, longitude) of 50000 locations in France, with a population higher than 1000 (inhabitants)
    
    http://www.cubicweb.org/file/2156095?vid=download

    There are also AreaChart and LineArray views, among others.

    Examples on relational data

    Relational Matrix (undirected graph)

    The request:

    Any X,Y WHERE X continent CO, CO name "North America", X neighbour_of Y
    

    that may be translated as:

    All neighbour countries in North America
    

    and using the vid='protovis-binarymap' and arid='undirected-rel'

    http://www.cubicweb.org/file/2154796?vid=download

    Relational Matrix (directed graph)

    If we do not want a symmetric matrix, i.e. if we want to keep the direction of a link (X,Y is not the same relation as Y,X), we can use the directed-rel array processor. For example, with the following request:

    Any X,Y LIMIT 20 WHERE X continent Y
    

    that may be translated as:

    20 countries and their continent
    

    and using the vid='protovis-binarymap' and arid='directed-rel'

    http://www.cubicweb.org/file/2154797?vid=download

    Force directed graph

    For a dynamic representation of relations, we can use a force directed graph. The request:

    Any X,Y WHERE X neighbour_of Y
    

    that may be translated as:

    All neighbour countries in the World.
    

    and using the vid='protovis-forcedirected' and arid='undirected-rel', we can see the full graph, with small independent components (e.g. UK and Ireland)

    http://www.cubicweb.org/file/2154800?vid=download

    Again, a third column in the result set could be used to encode some labeling information, for example the continent.

    The request:

    Any X,Y,CO WHERE X neighbour_of Y, X continent CO
    

    that may be translated as:

    All neighbour countries in the World, and their corresponding continent.
    

    and again, using the vid='protovis-forcedirected' and arid='undirected-rel', we can see the full graph with the continents encoded in color (Americas in green, Africa in dark blue, ...)

    http://www.cubicweb.org/file/2154801?vid=download

    Dendrogram

    For hierarchical information, one can use the Dendrogram view. For example, with the request:

    Any X,Y WHERE X continent Y
    

    that may be translated as:

    All couples (country, continent) in the World
    

    and using vid='protovis-dendrogram' and arid='directed-rel', we obtain the following dendrogram (only part of it is shown for lack of space)

    http://www.cubicweb.org/file/2154806?vid=download

    Unsupervised Learning

    We have also developed some machine learning views for unsupervised learning. This is more a proof of concept than a fully optimized development, but we can already do some cool stuff. Each machine learning process is referenced by an mlid. For example, with the request:

    Any LO, LA WHERE X is Location, X elevation E, X elevation >1, X latitude LA, X longitude LO,
    X country CO, CO name "France"
    

    that may be translated as:

    All couples (latitude, longitude) of the locations in France, with an elevation higher than 1
    

    and using vid='protovis-scatterplot', arid='attr-asfloat' and mlid='kmeans', we can construct a scatter plot of all couples of latitude and longitude in France, and create 10 clusters using k-means clustering. The cluster labels are encoded in color/size:

    http://www.cubicweb.org/file/2154804?vid=download

    Download

    Finally, we have also implemented a download view, based on a pickle of the numpy array. It is thus possible to access any data remotely from a Python shell and process it as you want. Changing the request can be done very easily by changing the rql parameter in the URL. For example:

    import pickle, urllib
    url = ('http://mydomain?rql=' + urllib.quote('my request')
           + '&vid=array-numpy&arid=attr-asfloat')
    data = pickle.loads(urllib.urlopen(url).read())
    

  • CubicWeb sprint in Paris - 2012/02/07-10

    2011/12/21 by Nicolas Chauvat

    Topics

    To be decided. Some possible topics are:

    • optimization (still)
    • porting cubicweb to python3
    • porting cubicweb to pypy
    • persistent sessions
    • finish twisted / wsgi refactoring
    • inter-instance communication bus
    • use subprocesses to handle datafeeds
    • developing more debug-tools (debug console, view profiling, etc.)
    • pluggable / unpluggable external sources (as needed for the cubipedia and semantic family)
    • client-side only applications (javascript + http)
    • mercurial storage backend: see this thread of the mailing list
    • mercurial-server integration: see this email to the mailing list

    Other ideas are welcome; please bring them up on cubicweb@lists.cubicweb.org

    Location

    This sprint will take place in February 2012, from Tuesday the 7th to Friday the 10th. You are more than welcome to come along, help out and contribute. An introduction is planned for newcomers.

    Network resources will be available for those bringing laptops.

    Address: 104 Boulevard Auguste-Blanqui, Paris. Ring "Logilab" (googlemap)

    Metro: Glacière

    Contact: http://www.logilab.fr/contact

    Dates: 07/02/2012 to 10/02/2012


  • Geonames in CubicWeb !

    2011/12/14 by Vincent Michel

    CubicWeb is a semantic web framework written in Python that has been successfully used in large-scale projects, such as data.bnf.fr (French National Library's opendata) or Collections des musées de Haute-Normandie (museums of Haute-Normandie).

    CubicWeb provides a high-level query language called RQL, operating over a relational database (PostgreSQL in our case), and makes it possible to quickly instantiate an entity-relationship data model. By separating querying and displaying data into two distinct steps, it provides powerful means for data retrieval and processing.

    In this blog post, we will demonstrate some of these capabilities on the Geonames data.

    Geonames

    Geonames is an open-source compilation of geographical data from various sources:

    "...The GeoNames geographical database covers all countries and contains over eight million placenames that are available for download free of charge..." (http://www.geonames.org)

    The data is available as a dump containing different CSV files:

    • allCountries: main file containing information about 8,000,000 places in the world. We won't detail the various attributes of each location, but we will focus on some important properties, such as population and elevation. Moreover, admin_code_1 and admin_code_2 will be used to link the different locations to the corresponding AdministrativeRegion, and feature_code will be used to link the data to the corresponding type.
    • admin1CodesASCII.txt and admin2Codes.txt detail the different administrative regions, that are parts of the world such as region (Ile-de-France), department (Department of Yvelines), US counties...
    • featureCodes.txt details the different types of location that may be found in the data, such as forest(s), first-order administrative division, aqueduct, research institute, ...
    • timeZones.txt, countryInfo.txt and iso-languagecodes.txt are additional files providing information about timezones, countries and languages. They will be included in our CubicWeb database but won't be explained in more detail here.
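    The dump files are tab-separated text, so reading them is straightforward with the csv module. A minimal sketch of parsing the leading columns of an allCountries.txt line (the sample line and the exact column positions are assumptions for illustration, not taken from the import code):

```python
import csv
import io

# a sample line in the geonames tab-separated layout (fabricated here);
# the columns used below are assumed to be: 0 = geonameid, 1 = name,
# 4 = latitude, 5 = longitude
SAMPLE = "2988507\tParis\tParis\t\t48.85341\t2.3488\n"

def parse_locations(fileobj):
    """Yield one dict per location row of a geonames-style TSV file."""
    for row in csv.reader(fileobj, delimiter='\t'):
        yield {'geonameid': int(row[0]),
               'name': row[1],
               'latitude': float(row[4]),
               'longitude': float(row[5])}

locations = list(parse_locations(io.StringIO(SAMPLE)))
```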

    The Geonames website also provides some ways to browse the data: by Countries, by Largest Cities, by Highest mountains, by postal codes, etc. We will see that CubicWeb could be used to automatically create such ways of browsing data while allowing far deeper queries. There are two main challenges when dealing with such data:

    • the number of entries: with 8,000,000 placenames, we have to use efficient tools for storing and querying them.
    • the structure of the data: the different types of entries are separated in different files, but should be merged for efficient queries (i.e. we have to rebuild the different links between entities, e.g Location to Country or Location to AdministrativeRegion).

    Data model

    With CubicWeb, the data model of the application is written in Python. It defines different entity classes with their attributes, as well as the relationships between the different entity classes. Here is a sample of the schema.py that we have used for Geonames data:

    class Location(EntityType):
        name = String(maxsize=1024, indexed=True)
        uri = String(unique=True, indexed=True)
        geonameid = Int(indexed=True)
        latitude = Float(indexed=True)
        longitude = Float(indexed=True)
        feature_code = SubjectRelation('FeatureCode', cardinality='?*', inlined=True)
        country = SubjectRelation('Country', cardinality='?*', inlined=True)
        main_administrative_region = SubjectRelation('AdministrativeRegion',
                                  cardinality='?*', inlined=True)
        timezone = SubjectRelation('TimeZone', cardinality='?*', inlined=True)
        ...
    

    This indicates that the main Location class has a name attribute (string), a uri (string), a geonameid (integer), a latitude and a longitude (both floats), and some relations to other entity classes, such as FeatureCode (the relation is named feature_code), Country (named country), or AdministrativeRegion (named main_administrative_region).

    The cardinality of each relation is defined in the classical RDBMS way: * means any number, ? means zero or one, and 1 means exactly one.

    We give below a visualisation of the schema (obtained using the /schema relative URL):

    http://www.cubicweb.org/file/2124618?vid=download

    Import

    The data contained in the CSV files could be pushed and stored without any processing, but it is interesting to reconstruct the relations that may exist between different entities and entity classes, so that queries will be easier and faster.

    Executing the import procedure took us 80 minutes on regular hardware, which seems very reasonable given the amount of data (~7,000,000 entities, 920MB for the allCountries.txt file), and the fact that we are also constructing many indexes (on attributes or on relations) to improve the queries. This import procedure uses some low-level SQL commands to load the data into the underlying relational database.

    Queries and views

    As stated before, queries are performed in CubicWeb using RQL (Relational Query Language), which is similar to SPARQL but with a syntax closer to SQL. This language can be used to query the concepts directly while abstracting the physical structure of the underlying database. For example, one can use the following request:

    Any X LIMIT 10 WHERE X is Location, X population > 1000000,
        X country C, C name "France"
    

    that means:

    Give me 10 locations that have a population greater than 1000000, and that are in a country named "France"

    The corresponding SQL query is:

    SELECT _X.cw_eid FROM cw_Country AS _C, cw_Location AS _X
    WHERE _X.cw_population>1000000
          AND _X.cw_country=_C.cw_eid AND _C.cw_name='France'
    LIMIT 10
    

    We can see that RQL is higher-level than SQL and abstracts the details of the tables and the joins.

    A query returns a result set (a list of results), that can be displayed using views. A main feature of CubicWeb is to separate the two steps of querying the data and displaying the results. One can query some data and visualize the results in the standard web framework, download them in different formats (JSON, RDF, CSV,...), or display them in some specific view developed in Python.

    In particular, we will use the mapstraction.map view, which is based on the Mapstraction and OpenLayers libraries to display information on maps using data from OpenStreetMap. This mapstraction.map view uses a feature of CubicWeb called an adapter. An adapter adapts a class of entity to some interface, so views can rely on interfaces instead of types and display entities with different attributes and relations. In our case, the IGeocodableAdapter returns a latitude and a longitude for a given class of entity (here the mapping is trivial, but there are more complex cases... :) ):

    class IGeocodableAdapter(EntityAdapter):
        __regid__ = 'IGeocodable'
        __select__ = is_instance('Location')

        @property
        def latitude(self):
            return self.entity.latitude

        @property
        def longitude(self):
            return self.entity.longitude
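
Outside of CubicWeb, the adapter pattern boils down to something like the following plain-Python sketch (the class names and map_view helper are illustrative, not CubicWeb's actual API):

```python
class Location:
    """A concrete entity class with its own attribute names."""
    def __init__(self, name, latitude, longitude):
        self.name, self.latitude, self.longitude = name, latitude, longitude

class GeocodableAdapter:
    """Wraps an entity and exposes the latitude/longitude interface."""
    def __init__(self, entity):
        self.entity = entity

    @property
    def latitude(self):
        return self.entity.latitude

    @property
    def longitude(self):
        return self.entity.longitude

def map_view(entities):
    # the view relies only on the interface, never on the entity class
    return [(GeocodableAdapter(e).latitude, GeocodableAdapter(e).longitude)
            for e in entities]

points = map_view([Location('Mont Blanc', 45.832, 6.865)])
```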
    

    We will give some results of queries and views later. Note that the following screenshots are taken without any modification of the standard web interface of CubicWeb. It is possible to write specific views and to define specific CSS, but here we only want to show how CubicWeb can handle such data. The default web template of CubicWeb is sufficient for our purpose, as it dynamically creates web pages showing attributes and relations, as well as forms and javascript applets adapted to the data (e.g. map-based tools). Last but not least, the query and the view can be defined within the URL, which opens a world of new possibilities to the user:

    http://baseurl:port/?rql=The query that I want&vid=Identifier-of-the-view
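
Building such a URL from Python takes only a few lines with the standard library. In the sketch below, the base URL is of course a placeholder, and 'csvexport' is assumed to be the identifier of CubicWeb's CSV download view:

```python
from urllib.parse import urlencode

base_url = 'http://baseurl:port/'  # placeholder, as above
rql = ('Any X LIMIT 10 WHERE X is Location, X population > 1000000, '
       'X country C, C name "France"')
# vid selects the view; 'csvexport' should trigger the CSV download view
url = base_url + '?' + urlencode({'rql': rql, 'vid': 'csvexport'})
```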
    

    Facets

    We will not go into too much detail about facets; let's just say that this feature can be used to define filtering axes on the data, and thus to post-filter a result set. In this example, we have defined four facets: one on the population, one on the elevation, one on the feature_code and one on the main_administrative_region. We will see illustrations of these facets below.

    We give here an example of the definition of a Facet:

    class LocationPopulationFacet(facet.RangeFacet):
        __regid__ = 'population-facet'
        __select__ = is_instance('Location')
        order = 2
        rtype = 'population'
    

    where __select__ defines which class(es) of entities are targeted by this facet, order defines the display order of the different facets, and rtype defines the target attribute/relation used for filtering.
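
Conceptually, what a RangeFacet does to a result set can be sketched in a few lines of plain Python (the sample data and the range_facet helper are purely illustrative):

```python
locations = [
    {'name': 'Paris', 'population': 2125246},
    {'name': 'Marseille', 'population': 797491},
    {'name': 'Lyon', 'population': 463700},
]

def range_facet(rows, rtype, lower, upper):
    """Post-filter rows whose `rtype` value lies within [lower, upper]."""
    return [row for row in rows if lower <= row[rtype] <= upper]

# keep only the locations whose population falls in the selected range
big_cities = range_facet(locations, 'population', 1000000, 10000000)
```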

    Geonames in CubicWeb

    The main page of the Geonames application is shown in the screenshot below. It provides general information about the database, in particular the number of entities in the different classes:

    • 7,984,330 locations.
    • 59,201 administrative regions (e.g. regions, counties, departments...)
    • 7,766 languages.
    • 656 features (e.g. types of location).
    • 410 time zones.
    • 252 countries.
    • 7 continents.
    http://www.cubicweb.org/file/2124617?vid=download

    Simple query

    We will first illustrate the possibilities of CubicWeb with the simple query detailed before (which could be pasted directly into the URL...):

    Any X LIMIT 10 WHERE X is Location, X population > 1000000,
        X country C, C name "France"
    

    We obtain the following page:

    http://www.cubicweb.org/file/2124615?vid=download

    This is the standard CubicWeb view for displaying results. We can see (right box) that we obtain 10 locations that are indeed located in France, with a population of more than 1,000,000 inhabitants. The left box shows the search panel that can be used to launch queries, and the facet filters that can be used to narrow down results, e.g. we may keep only results with a population greater than 4,767,709 inhabitants within the previous results:

    http://www.cubicweb.org/file/2124616?vid=download

    and we now obtain only 4 results. We can also notice that the facets are linked: restricting the result set with the population facet also restricts the choices offered by the other facets.

    Simple query (but with more information !)

    Let's say that we now want more information about the results obtained previously (for example the exact population, the elevation and the name). This is really simple! We just have to ask for what we want within the RQL query (of course, the variable names N, P and E could be almost anything...):

    Any N, P, E LIMIT 10 WHERE X is Location,
        X population P, X population > 1000000,
        X elevation E, X name N, X country C, C name "France"
    
    http://www.cubicweb.org/file/2124619?vid=download

    The empty column for the elevation simply means that we don't have any information about elevation.

    Anyway, we can see that fetching particular information could not be simpler! Indeed, with more complex queries, we can access a wealth of information from the Geonames database:

    Any N,E,LA,LO ORDERBY E DESC LIMIT 10  WHERE X is Location,
          X latitude LA, X longitude LO,
          X elevation E, NOT X elevation NULL, X name N,
          X country C, C name "France"
    

    which means:

    Give me the 10 highest locations (the 10 first when sorting by decreasing elevation) with their name, elevation, latitude and longitude that are in a country named "France"
    http://www.cubicweb.org/file/2124626?vid=download

    We can now use another view on the same request, e.g. on a map (view mapstraction.map):

    Any X ORDERBY E DESC LIMIT 10  WHERE X is Location,
           X latitude LA, X longitude LO, X elevation E,
           NOT X elevation NULL, X country C, C name "France"
    
    http://www.cubicweb.org/file/2124631?vid=download

    We can now ask for more results (20) and require that the locations have a non-null population:

    Any N, E, P, LA, LO ORDERBY E DESC LIMIT 20  WHERE X is Location,
           X latitude LA, X longitude LO,
           X elevation E, NOT X elevation NULL, X population P,
           X population > 0, X name N, X country C, C name "France"
    
    http://www.cubicweb.org/file/2124632?vid=download

    ... and on a map ...

    http://www.cubicweb.org/file/2124633?vid=download

    Conclusion

    In this blog post, we have seen how CubicWeb can be used to store and query complex data, while providing (among other things...) web-based views for data visualization. It allows the user to query data directly within the URL and can be used to interact with and explore the data in depth. In a future post, we will present more complex queries to show the full possibilities of the system.


  • Importing thousands of entities into CubicWeb within a few seconds with dataimport

    2011/12/09 by Adrien Di Mascio

    In most CubicWeb projects I've been developing, there always comes a time when I need to import legacy data into the new application. CubicWeb provides Store and Controller objects in the dataimport module. I won't talk here about the recommended general procedure described in the module's docstring (I find it a bit convoluted for simple cases) but will focus on Store objects. Store objects in this module are more or less a thin layer around session objects: they provide high-level helpers such as create_entity() and relate(), and keep track of what was inserted, errors that occurred, etc.

    In a recent project, I had to create a fairly large number (a few million) of simple entities (strings, integers, floats and dates) and relations. The default object store (i.e. cubicweb.dataimport.RQLObjectStore) is painfully slow, the reason being all the integrity / security / metadata hooks that are constantly selected and executed. For large imports, dataimport also provides cubicweb.dataimport.NoHookRQLObjectStore. This store bypasses all hooks and uses the underlying system source primitives directly, making it around twice as fast as the standard store. The problem is that we are still executing each SQL query sequentially, and we are talking here about millions of INSERT / UPDATE queries.

    My idea was to create my own ObjectStore class, inheriting from NoHookRQLObjectStore, that would try to use executemany or even copy_from when possible [1]. It is actually not hard to group similar SQL queries together, since create_entity() generates the same query for a given set of parameters. For instance:

    create_entity('Person', firstname='John', surname='Doe')
    create_entity('Person', firstname='Tim', surname='BL')
    

    will generate the following SQL queries:

    INSERT INTO cw_Person ( cw_cwuri, cw_eid, cw_modification_date,
                            cw_creation_date, cw_firstname, cw_surname )
           VALUES ( %(cw_cwuri)s, %(cw_eid)s, %(cw_modification_date)s,
                    %(cw_creation_date)s, %(cw_firstname)s, %(cw_surname)s )
    INSERT INTO cw_Person ( cw_cwuri, cw_eid, cw_modification_date,
                            cw_creation_date, cw_firstname, cw_surname )
           VALUES ( %(cw_cwuri)s, %(cw_eid)s, %(cw_modification_date)s,
                    %(cw_creation_date)s, %(cw_firstname)s, %(cw_surname)s )
    

    The only thing that differs is the actual data inserted. Well ... ahem ... CubicWeb actually also generates a "few" extra SQL queries to insert metadata for each entity:

    INSERT INTO is_instance_of_relation(eid_from,eid_to) VALUES (%s,%s)
    INSERT INTO is_relation(eid_from,eid_to) VALUES (%s,%s)
    INSERT INTO cw_source_relation(eid_from,eid_to) VALUES (%s,%s)
    INSERT INTO owned_by_relation ( eid_to, eid_from ) VALUES ( %(eid_to)s, %(eid_from)s )
    INSERT INTO created_by_relation ( eid_to, eid_from ) VALUES ( %(eid_to)s, %(eid_from)s )
    

    Those extra queries are in fact exactly the same for each inserted entity, whatever the entity type, hence crying out for executemany or copy_from. Grouping SQL queries together is not that hard [2] but has a drawback: since there is no intermediate state (the data is actually inserted only at the very end of the process), you lose the ability to query your database to fetch the entities you've just created during the import.
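
The batching idea can be sketched with the standard sqlite3 module. The table below is a simplified stand-in for cw_Person; with PostgreSQL and psycopg2 one could go further and promote such a batch to a COPY FROM via copy_from(), as discussed in the footnotes:

```python
import sqlite3

cnx = sqlite3.connect(':memory:')
cnx.execute('CREATE TABLE cw_Person (cw_firstname TEXT, cw_surname TEXT)')

# record the parameters of identical INSERT statements as they come ...
pending = [
    {'cw_firstname': 'John', 'cw_surname': 'Doe'},
    {'cw_firstname': 'Tim', 'cw_surname': 'BL'},
]
# ... then flush them in a single batch instead of one round-trip per row
cnx.executemany(
    'INSERT INTO cw_Person (cw_firstname, cw_surname) '
    'VALUES (:cw_firstname, :cw_surname)', pending)
inserted = cnx.execute('SELECT count(*) FROM cw_Person').fetchone()[0]
```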

    Now, a few benchmarks ...

    To create those benchmarks, I decided to use the workorder cube, which is a simple yet complete enough cube: it provides only two entity types (WorkOrder and Order), a relation between them (Order split_into WorkOrder) and uses different kinds of attributes (String, Date, Float).

    Once the cube was instantiated, I ran the following script to populate the database with my 3 different stores:

    import sys
    from datetime import date
    from random import choice
    from itertools import count
    
    from logilab.common.decorators import timed
    
    from cubicweb import cwconfig
    from cubicweb.dbapi import in_memory_repo_cnx
    
    def workorders_data(n, seq=count()):
        for i in xrange(n):
            yield {'title': u'wo-title%s' % seq.next(), 'description': u'foo',
                   'begin_date': date.today(), 'end_date': date.today()}
    
    def orders_data(n, seq=count()):
        for i in xrange(n):
            yield {'title': u'o-title%s' % seq.next(), 'date': date.today(), 'budget': 0.8}
    
    def split_into(orders, workorders):
        for workorder in workorders:
            yield choice(orders), workorder
    
    def initial_state(session, etype):
        return session.execute('Any S WHERE S is State, WF initial_state S, '
                               'WF workflow_of ET, ET name %(etn)s', {'etn': etype})[0][0]
    
    
    @timed
    def populate(store, nb_workorders, nb_orders, set_state=False):
        orders = [store.create_entity('Order', **attrs)
                  for attrs in orders_data(nb_orders)]
        workorders = [store.create_entity('WorkOrder', **attrs)
                      for attrs in workorders_data(nb_workorders)]
        ## in_state is set by a hook, so NoHookObjectStore will need
        ## to set the relation manually
        if set_state:
            order_state = initial_state(store.session, 'Order')
            workorder_state = initial_state(store.session, 'WorkOrder')
            for order in orders:
                store.relate(order.eid, 'in_state', order_state)
            for workorder in workorders:
                store.relate(workorder.eid, 'in_state', workorder_state)
        for order, workorder in split_into(orders, workorders):
            store.relate(order.eid, 'split_into', workorder.eid)
        store.commit()
    
    
    if __name__ == '__main__':
        config = cwconfig.instance_configuration(sys.argv[1])
        nb_orders = int(sys.argv[2])
        nb_workorders = int(sys.argv[3])
        repo, cnx = in_memory_repo_cnx(config, login='admin', password='admin')
        session = repo._get_session(cnx.sessionid)
        from cubicweb.dataimport import RQLObjectStore, NoHookRQLObjectStore
        from cubes.mycube.dataimport.store import CopyFromRQLObjectStore
        print 'testing RQLObjectStore'
        store = RQLObjectStore(session)
        populate(store, nb_workorders, nb_orders)
        print 'testing NoHookRQLObjectStore'
        store = NoHookRQLObjectStore(session)
        populate(store, nb_workorders, nb_orders, set_state=True)
        print 'testing CopyFromRQLObjectStore'
        store = CopyFromRQLObjectStore(session)
        populate(store, nb_workorders, nb_orders, set_state=True)
    

    I ran the script and asked it to create 100 Order entities, 1000 WorkOrder entities, and to link each created WorkOrder to a parent Order:

    adim@esope:~/tmp/bench_cwdi$ python bench_cwdi.py bench_cwdi 100 1000
    testing RQLObjectStore
    populate clock: 24.590000000 / time: 46.169721127
    testing NoHookRQLObjectStore
    populate clock: 8.100000000 / time: 25.712352991
    testing CopyFromRQLObjectStore
    populate clock: 0.830000000 / time: 1.180006981
    

    My interpretation of the above times is :

    • The clock time indicates the time spent on the CubicWeb server side (i.e. hooks and data pre/postprocessing around SQL queries). The time value should be the sum of the clock time and the time spent in PostgreSQL.
    • RQLObjectStore is slow ;-). Nothing new here, but the clock/time ratio shows that we're spending a lot of time on the Python side (i.e. in hooks, as mentioned earlier) and a fair amount of time in PostgreSQL.
    • NoHookRQLObjectStore really brings down the time spent on the Python side; the time spent in PostgreSQL remains about the same as for RQLObjectStore, which is not surprising since the queries performed are the same in both cases.
    • CopyFromRQLObjectStore seems blazingly fast in comparison (inserting a few thousand elements into PostgreSQL with a COPY FROM statement is not a problem). And ... yes, I checked that the data was actually inserted, and I even ran a cubicweb-ctl db-check on the instance afterwards.

    This probably opens new perspectives for massive data imports, since the client API remains the same as before for the programmer. It is still a bit experimental and can only be used for "dummy", brute-force import scenarios where you can preprocess your data in Python before updating the database, but it is probably worth having such a store in the dataimport module.

    [1] The idea is to promote an executemany('INSERT INTO ...', data) statement into a COPY FROM whenever possible (i.e. simple data types, easy enough to escape). In that case, the underlying database and Python modules have to provide support for this functionality. For the record, the psycopg2 module exposes a copy_from() method, and logilab-database will soon provide an additional high-level helper for this functionality (see this ticket).
    [2] The code will be posted later or even integrated into CubicWeb at some point. For now, it requires a bit of monkey-patching around one or two methods in the source so that the SQL is not executed but just recorded for later execution.
