In my organization, we deploy all our software to our production machines using RPM. Our automated build process involves checking out the source from version control, tarring that source directory up, then running rpmbuild against that source tarball. rpmbuild only uses the tarball to untar the sources before working on them, so the whole tarring and untarring business seems like an unneeded extra step in the build. Is there a way to just specify the source directory in the spec file, avoiding those extra steps?
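For concreteness, here is roughly what the automated build does today; the package name and paths below are made up:

    # version control checkout lands in ./myapp-1.0 (names are hypothetical)
    cd /home/build/work
    tar czf ~/rpmbuild/SOURCES/myapp-1.0.tar.gz myapp-1.0    # tar up the checkout
    rpmbuild -bb ~/rpmbuild/SPECS/myapp.spec                 # %prep (presumably just %setup -q) untars it again

The tarball exists only so that %prep can unpack it again, which is the step I'd like to skip.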
Eddie's questions
At my organization we have a processing and storage system spread across two dozen Linux machines that handles over a petabyte of data. The system right now is very ad hoc: processing automation and data management are handled by a collection of large Perl programs running on independent machines. I am looking at distributed processing and storage systems to make the whole thing easier to maintain, to distribute load and data evenly with replication, and to grow in disk space and compute power.
The system needs to be able to handle millions of files, ranging in size from 50 megabytes to 50 gigabytes. Once created, the files will not be appended to, only replaced completely if need be. The files need to be accessible via HTTP for customer download.
Right now, processing is automated by Perl scripts (which I have complete control over) that call a series of other programs (which I don't have control over because they are closed source) that essentially transform one data set into another. There is no data mining happening here.
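To give a feel for it, one processing step today looks roughly like this (all program and path names here are hypothetical):

    # a driver (currently Perl) walks an input directory and shells out to a
    # closed-source binary that turns one data set into another
    for f in /data/in/*.dat; do
        /opt/vendor/bin/transform "$f" "/data/out/$(basename "$f" .dat).out"
    done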
Here is a quick list of things I am looking for:
Reliability: The data must be accessible over HTTP about 99% of the time, so I need something that does data replication across the cluster.
Scalability: I want to be able to add more processing power and storage easily and rebalance the data across the cluster.
Distributed processing: Easy and automatic job scheduling and load balancing that fits with the processing workflow I briefly described above.
Data location awareness: Not strictly required but desirable. Since data and processing will live on the same set of nodes, I would like the job scheduler to schedule jobs on or close to the node that the data actually resides on, to cut down on network traffic (a sketch of what I mean follows this list).
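To illustrate the data-location point, what I want is to be able to ask the storage layer where a file lives and feed that to the scheduler, something like the following. The locate_file lookup is hypothetical; the resource request is Sun Grid Engine syntax:

    # hypothetical: ask the storage layer which node holds the file
    host=$(locate_file /data/in/granule_001.dat)     # e.g. "node07"
    # then hint the scheduler to run the job on (or near) that node
    qsub -l hostname="$host" run_transform.sh /data/in/granule_001.dat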
Here is what I've looked at so far:
Storage Management:
GlusterFS: Looks really nice and easy to use but doesn't seem to have a way to figure out what node(s) a file actually resides on to supply as a hint to the job scheduler.
GPFS: Seems like the gold standard of clustered filesystems. Meets most of my requirements but, like GlusterFS, lacks data location awareness.
Ceph: Seems way too immature right now.
Distributed processing:
Sun Grid Engine: I have a lot of experience with this, and it's relatively easy to use (once it is configured properly, that is). But Oracle got its icy grip around it, and it no longer seems very desirable.
Both:
Hadoop/HDFS: At first glance, Hadoop looked perfect for my situation: distributed storage plus job scheduling, and it was the only thing I found that would give me the data location awareness I want. But I don't like the NameNode being a single point of failure. Also, I'm not really sure the MapReduce paradigm fits the type of processing workflow I have. It seems like you have to write all your software specifically for MapReduce instead of just using Hadoop as a generic job scheduler (a sketch of the closest thing I can see is below, after this list).
OpenStack: I've done some reading on this, but I'm having trouble deciding whether it's a good fit for my problem.
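Regarding the Hadoop point above: the closest thing I can see to using it as a generic job scheduler is Hadoop Streaming, which wraps an arbitrary executable as the mapper. A rough sketch (the jar path and file names are illustrative):

    hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
        -D mapred.reduce.tasks=0 \
        -input  /in/granules \
        -output /out/granules \
        -mapper /opt/vendor/bin/transform
    # but streaming feeds records over stdin/stdout, which our closed-source
    # tools don't speak, hence my doubt that MapReduce fits the workflow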
Does anyone have opinions or recommendations for technologies that would fit my problem well? Any suggestions or advice would be greatly appreciated.
Thanks!