jSCSI, soon a part of the Hadoop Stack
jSCSI, still actively used
It has been quite calm around my open source projects. Work keeps me busy, as does bringing this blog back to life. Just as I was making the last modifications to the blog, an email from the Hadoop ecosystem arrived: they plan to extend HDFS to support block storage. One of the only plain-Java implementations capable of this is jSCSI. As a consequence, jSCSI is becoming part of Hadoop.
How is jSCSI used in the Hadoop stack?
I will just refer to the summary and proposal in the Hadoop JIRA:
- https://issues.apache.org/jira/browse/HDFS-11118
- https://issues.apache.org/jira/secure/attachment/12837867/cblock-proposal.pdf
The implementation of cBlock is done, and the jSCSI integration is currently being tested rigorously with volumes of up to 8 TB in size and with benchmarks such as iozone and filebench.
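For context, a volume exported this way is attached and benchmarked like any other iSCSI target. The snippet below is only an illustrative sketch: the portal address, IQN, device name, and mount point are placeholders, not the actual cBlock test setup.

```bash
# Illustrative only: attaching an iSCSI-exported volume with open-iscsi
# and running iozone against it. Portal, IQN, device and mount point are
# placeholders, not the real cBlock test configuration.
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
iscsiadm -m node -T iqn.2016-11.org.example:cblock-vol1 -p 192.0.2.10:3260 --login

mkfs.ext4 /dev/sdb                      # the newly attached block device
mount /dev/sdb /mnt/cblock

# iozone in automatic mode, capped at an 8 GB maximum file size
iozone -a -g 8g -f /mnt/cblock/iozone.tmp
```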
How is jSCSI currently released?
I now need to release jSCSI on request; the first request came up just after Christmas. Reconstructing a manual release was not as easy as expected. First, I realized that the pom was badly structured and that the usage of profiles should be improved. Second, I simply got lost in the concrete commands: Maven-based signing of artifacts, missing credentials for OSSRH, and so on. It has been a while since jSCSI received an update…
Outcomes:
- Credentials are retrieved
- maven-gpg-plugin 1.4 asks for a passphrase even if the private key is not encrypted, and it says nothing at all about wrong credentials.
- A handful of Maven commands release jSCSI at the moment; see the sketch after this list.
- The profiles and modules need to be cleaned up first before releasing.
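For reference, here is a sketch of the kind of release sequence involved, assuming the usual Sonatype OSSRH setup with GPG-signed artifacts and a profile that binds the maven-gpg-plugin. The profile name "release" and the use of the gpg.passphrase property are assumptions; the actual jSCSI pom may be wired differently.

```bash
# Sketch only: a manual release to Sonatype OSSRH with GPG-signed artifacts.
# The "release" profile and the gpg.passphrase property are assumptions.
mvn clean verify                      # sanity-check the build first

mvn clean deploy -P release \
    -Dgpg.passphrase="$PASSPHRASE"    # maven-gpg-plugin 1.4 insists on a
                                      # passphrase even for unencrypted keys
```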