Mike
2006-04-11 17:04:20 UTC
Hello,
I would like some comments on my design ideas regarding this process I
intend to create.
I have a C++ program, let's call it NumberCruncher, that I wrote on Windows.
It's not a Windows-specific program but vanilla C++ using STLport containers
(maps, hash tables, etc.). I'm running into issues where the numbers I'm
calculating are bigger than the largest integer type currently available to
me on Windows, hence the idea to port to 64-bit Linux.
There are also issues where I run out of memory, but the main problem is the
data size. My plan is to build a machine with a 64-bit CPU, load 64-bit
Linux on it (I'm thinking Fedora), and use gcc or g++ (not sure which) to
port my program over.
That sounds pretty clear-cut, but I realize there will be issues here and
there. I don't know for sure, but I'm assuming there's a 64-bit STL available.
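From what I've read, g++ is the C++ driver for gcc, so that's probably what I'd use. To sanity-check the port first, I figure I could compile and run something like this little sketch (the file name is made up, and I'm assuming g++ on x86-64, where long and size_t widen to 64 bits and stdint.h gives fixed-width types):

// check64.cpp -- sketch: confirm type widths on the new box.
// Assumes g++ on 64-bit Linux (LP64: long and pointers are 64 bits).
#include <stdint.h>   // C99 fixed-width types, shipped with gcc
#include <iostream>

int main()
{
    std::cout << "sizeof(long)    = " << sizeof(long)    << '\n';
    std::cout << "sizeof(size_t)  = " << sizeof(size_t)  << '\n';
    std::cout << "sizeof(int64_t) = " << sizeof(int64_t) << '\n';

    // int64_t works on both platforms, so switching the big counters
    // to it now should keep the code portable during the transition.
    int64_t big = 9000000000000000000LL;  // ~9.0e18, far past 32 bits
    std::cout << "big = " << big << '\n';
    return 0;
}

Built with something like "g++ -Wall -O2 check64.cpp -o check64", if I understand the toolchain right.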
Next, I'd like to set this up so that my web machine (which is Windows) FTPs
the data file over to the Linux machine for processing. The Linux machine
then processes the data file, loads the data into MySQL, skims off the top X
data pieces I'm interested in, and FTPs those back over to the Windows side
for storage in SQL Server and display on the web.
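For the top-X skim, here's roughly what I have in mind in plain STL; the Record struct and the value of X are placeholders for illustration:

// Sketch: skim the top X records by score before shipping back to Windows.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

struct Record {          // placeholder shape for one data piece
    long long id;
    double    score;
};

static bool byScoreDesc(const Record& a, const Record& b)
{
    return a.score > b.score;  // highest score first
}

int main()
{
    std::vector<Record> records;
    // ... fill records from the parsed data file / MySQL query ...
    Record r1 = { 1, 3.5 }; records.push_back(r1);
    Record r2 = { 2, 9.1 }; records.push_back(r2);
    Record r3 = { 3, 6.2 }; records.push_back(r3);

    const std::size_t X = 2;                     // placeholder "top X"
    std::size_t n = std::min(records.size(), X);

    // partial_sort puts the n best records at the front, in order,
    // in O(N log X) instead of sorting the whole vector.
    std::partial_sort(records.begin(), records.begin() + n,
                      records.end(), byScoreDesc);

    for (std::size_t i = 0; i < n; ++i)
        std::cout << records[i].id << ' ' << records[i].score << '\n';
    return 0;
}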
Again, that sounds pretty clear-cut, but we'll see.
On Linux, how would it detect that there's a new file available for
processing? Is that a cron job that watches the disk, and how would it know
the FTP transfer is actually complete rather than still in progress, meaning
the data is still being transferred? The data files themselves are small, so
it might not be an issue, but given a chance for a problem, a process will
find it.
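One convention I've read about is to have the uploader write to a temporary name (say name.tmp) and rename it to its final name when the transfer finishes, since the rename is atomic; the Linux side then only ever reacts to complete files. Supposedly recent 2.6 kernels can watch the directory with inotify instead of polling from cron. Does that sound right? A rough sketch of what I mean, with the directory path and suffix made up:

// Sketch: wait for finished uploads using inotify (Linux 2.6.13+).
// Assumed convention: Windows uploads to "name.tmp", then renames to
// "name.dat" when done, so IN_MOVED_TO only fires on complete files.
#include <sys/inotify.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
    const char* dir = "/var/spool/numbercruncher/in";  // placeholder path

    int fd = inotify_init();
    if (fd < 0) { perror("inotify_init"); return 1; }

    // IN_MOVED_TO catches the rename; IN_CLOSE_WRITE would also catch an
    // uploader that writes the final name directly and then closes it.
    int wd = inotify_add_watch(fd, dir, IN_MOVED_TO | IN_CLOSE_WRITE);
    if (wd < 0) { perror("inotify_add_watch"); return 1; }

    char buf[4096];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);  // blocks until an event
        if (len <= 0) break;

        ssize_t i = 0;
        while (i < len) {
            struct inotify_event* ev = (struct inotify_event*)(buf + i);
            // crude suffix check, good enough for a sketch
            if (ev->len > 0 && strstr(ev->name, ".dat") != NULL)
                printf("ready for processing: %s/%s\n", dir, ev->name);
            i += sizeof(struct inotify_event) + ev->len;
        }
    }
    close(fd);
    return 0;
}

If the kernel or glibc turns out to be too old for inotify, I assume a cron job polling for the renamed .dat files would work the same way, just with more latency.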
That's about it. I'd just like some thoughts or ideas regarding this plan. My
background is C++ on Windows, so this will be my first real serious adventure
into development on Linux. When I think C++ on Linux, I think all command
line, so what are the popular graphical environments to program in?
I'm setting up Fedora now on one of my dev machines (dual-boot with Windows
2000) to get a start on development; this machine isn't 64-bit yet.
Thanks a lot!!