----- Original Message -----
From: "Dupont, Michael" <michael.dupont(a)mciworldcom.de>
To: <ciao-users(a)clip.dia.fi.upm.es>
Sent: Wednesday, December 12, 2001 11:51 AM
Subject: Database and memory limitations
Dear Ciao Users,
I have been experimenting with using Prolog as a repository for program meta-information about C++ programs. Concretely, I have exported the parse trees from the gcc compiler into XML, translated them into a Prolog program using XSLT, and then loaded them by compiling the program.
I have the following predicate structure:

node( Id, Type )
node( Id, Attribute, Value )
node( FromId, Relationship, ToId )
The problem I have is the following: the number of nodes is maxing out the system (100,000-1,000,000 nodes with 5-25 relationships per node).
Can you tell me of ways to use Prolog on datasets this large? Specifically, I would like to use Prolog to query the dataset and to write queries against it.
Michael,
Depending on how you access the data, I would look at "de-normalizing" it first, e.g.
node( 1, a ).
node( 1, colour, blue ).
node( 1, size, big ).
node( 1, above, 2 ).
node( 1, next_to, 3 ).
changes into:
% de_normalized( Id, Type, AttributesAndRelationships ).
de_normalized( 1, a, [colour=blue, size=big, above=2, next_to=3] ).
then:
node( Id, Type ) :- de_normalized( Id, Type, _AVs ).
and:
node( Id, A, V ) :-
    de_normalized( Id, _Type, AVs ),
    member( A=V, AVs ).
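Putting it together, a minimal self-contained sketch might look like this (in Ciao, member/2 comes from library(lists), so the file needs a use_module declaration; the single example fact and the query below are just illustrations):

:- use_module(library(lists), [member/2]).

% de_normalized( Id, Type, AttributesAndRelationships ).
de_normalized( 1, a, [colour=blue, size=big, above=2, next_to=3] ).

node( Id, Type ) :- de_normalized( Id, Type, _AVs ).
node( Id, A, V ) :-
    de_normalized( Id, _Type, AVs ),
    member( A=V, AVs ).

Then a query such as ?- node( 1, colour, C ). should bind C = blue, while the original node/2 and node/3 call patterns keep working unchanged. The point is that one de_normalized/3 fact replaces many small node/3 facts, which cuts the per-clause storage overhead considerably when you have millions of them.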
There are lots more tricks you can try, but this might get you started.
Regards
John Fletcher