Hiya Michael,
First off, I'm not an experienced Prolog coder, so my reply may well be naive; if so, feel free to simply bin it.
When evaluating several Prologs I noted that most had configuration settings for memory usage (heap size etc.), and indeed a couple of times I had to raise these settings in order to run my tests... might there be a similar issue here?
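For what it's worth, here's a sketch of how you might check whether memory really is the bottleneck. Most Prolog systems, Ciao included, provide statistics/0 and statistics/2, but the exact keys and output format vary from system to system, so check your manual rather than taking this literally:

```prolog
% Print overall memory statistics before and after loading the dataset,
% then compare heap/stack growth in the two snapshots.
% statistics/0 is widely available; its output format is system-specific.
check_load(File) :-
    statistics,          % snapshot before loading
    ensure_loaded(File), % load the generated Prolog facts
    statistics.          % snapshot after loading
```

If the figures are close to a configured limit, raising that limit (or letting the stacks grow dynamically, where the system supports it) might be all that's needed.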
I find it interesting that you are transforming XML datasets into Prolog with XSLT. The reason your snippet caught my eye is that I'm about to try out some previous work with Topic Navigation Maps in Prolog (which is new to me), basically to see what fits well and what doesn't.
If you do find a solution I'd appreciate hearing it, as after Christmas I was going to throw a large dataset (DMoz) at Ciao to see how far it would get, and from the sound of it the answer would be: not very far.
Sorry I can't be of any substantial help, and thanks for any follow-up you're able to give.
Regards, Guy Murphy.
----- Original Message -----
From: "Dupont, Michael" <michael.dupont(a)mciworldcom.de>
To: <ciao-users(a)clip.dia.fi.upm.es>
Sent: Wednesday, December 12, 2001 11:51 AM
Subject: Database and memory limitations
Dear Ciao Users,
I have been experimenting with using Prolog as a repository for program meta-information about C++ programs. Concretely, I have exported the parse trees from the GCC compiler into XML, translated them into a Prolog program using XSLT, and then loaded them by compiling the program.
I have the following predicate structure:

node(Id, Type)
node(Id, Attribute, Value)
node(FromId, Relationship, ToId)
The problem I have is the following: the number of nodes is maxing out the system (100,000-1,000,000 nodes with 5-25 relationships per node).
Can you tell me of ways to use Prolog on datasets this large? Specifically, I would like to use Prolog to write queries against the dataset.
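For illustration, here is a toy version of the kind of data and query I mean (the sample facts and the names in them are invented, but the predicate shapes are the ones described above):

```prolog
% Invented sample facts in the three shapes described above.
node(n1, function).           % node(Id, Type)
node(n2, parameter).
node(n1, name, main).         % node(Id, Attribute, Value)
node(n2, name, argc).
node(n1, has_param, n2).      % node(FromId, Relationship, ToId)

% A sample query: find the names of all parameters of a given function.
param_name(Fun, Name) :-
    node(Fun, has_param, P),
    node(P, parameter),
    node(P, name, Name).
```

A call such as param_name(n1, Name) should bind Name to argc in this toy example; the question is how to make queries like this scale to the full dataset.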
Thanks in advance,
mike
MCL
_____________________________________________________________ All I want is a warm bed and a kind word and unlimited power.