In my PhD I did a similar thing, but I worked in Smalltalk and integrated my Prolog with it to eliminate this kind of overhead (I talked directly to my Smalltalk image instead of having to create a separate repository). I don't know whether that is possible in your approach. For what I needed to do, this was necessary and sufficient (the integration served other purposes as well, on which I'm not going to dwell right now :-) ).
My colleague, however, was checking software architectures against their implementation, and for that my logic language was too slow. We then used a 'regular' Prolog (WinProlog at the time) and an ODBC connection to a relational database in which we stored the full parse tree information. I think this setup could be very useful in your approach.
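To give an idea of what that database-backed setup looks like from the Prolog side, here is a minimal sketch. It uses SWI-Prolog's ODBC library as a stand-in for the WinProlog one we used back then, and the data source name 'parsetree' and the node table are purely illustrative:

```prolog
% Sketch only: assumes an ODBC data source 'parsetree' containing a
% table node(id, type); both names are illustrative, not our real setup.
:- use_module(library(odbc)).

% Enumerate the ids of all nodes of a given type, straight from the
% database rather than from in-memory Prolog facts.
node_of_type(Type, Id) :-
    odbc_connect('parsetree', Conn, []),
    format(atom(SQL), 'SELECT id FROM node WHERE type = \'~w\'', [Type]),
    odbc_query(Conn, SQL, row(Id)).
```

The point of the design is that the millions of facts stay in the database, and Prolog only pulls in the rows a query actually touches.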
More information can be found in my dissertation (http://www.iam.unibe.ch/~wuyts/ARTICLES/WuytsPhd.pdf) or in Kim Mens' dissertation (http://prog.vub.ac.be/Research/ResearchPublicationsDetail2.asp?paperID=82).
On Wednesday, December 12, 2001, at 12:51, Dupont, Michael wrote:
Dear Ciao Users,
I have been experimenting with using Prolog as a repository for program meta-information about C++ programs. Concretely, I have exported the parse trees from the GCC compiler into XML, translated them into a Prolog program using XSLT, and then loaded them in by compiling that program.
I have the following predicate structure: node(id, type), node(id, attribute, value), node(fromid, relationship, toid).
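To make that structure concrete, a hypothetical fragment of such a fact base, together with one query over it, might look like this (the ids, types, and names are illustrative, not taken from the actual GCC export):

```prolog
% Hypothetical fragment of the exported fact base.
node(1, class).                 % node(Id, Type)
node(1, name, 'Stack').         % node(Id, Attribute, Value)
node(2, method).
node(2, name, 'push').
node(1, contains, 2).           % node(FromId, Relationship, ToId)

% Example query: the names of all methods contained in a given class.
method_in_class(ClassName, MethodName) :-
    node(C, class),  node(C, name, ClassName),
    node(C, contains, M),
    node(M, method), node(M, name, MethodName).
```

Note that attribute facts and relationship facts are both node/3, so a query like node(C, contains, M) relies on the second argument to tell them apart.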
The problem I have is the following: the number of nodes is maxing out the system (100,000 to 1,000,000 nodes, with 5 to 25 relationships per node).
Can you tell me of ways to use Prolog on datasets this large? Specifically, I would like to use Prolog to query the dataset and write queries against it.
Thanks in advance,
mike
Roel Wuyts
Software Composition Group -- roel.wuyts(a)iam.unibe.ch
University of Bern, Switzerland -- http://www.iam.unibe.ch/~wuyts/
Board Member of the European Smalltalk User Group: www.esug.org