I have a file named like file.bin.gz. I tried using gzip -d f.bin.gz to uncompress it, and got a .bin file. Is the correct way to uncompress .bin files to use gzip -d? Also, .bin is a binary file, right?
I have tried both the gzip and gunzip commands, but I get an error either way:

gunzip *.gz
gzip: invalid option -- 'Y'

gunzip -S-1800-01-01-000000-g01.h5.gz
gzip: compressed data not read from a terminal. Use -f to force decompression. For help, type: gzip -h

If I try the -f option, it takes a very long time to work on one single file and the command does not complete successfully. Am I missing something?
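For reference, the errors above are consistent with filenames that begin with a dash, which gzip/gunzip then parse as options. A minimal sketch of the usual workarounds, using a hypothetical dash-named file modelled on the error message:

```shell
# A filename starting with "-" is parsed as options; "--" ends option
# parsing so the dash-named file is treated as an operand instead.
cd "$(mktemp -d)"
printf 'hello\n' > ./-1800-01-01-000000-g01.h5   # hypothetical name from the error
gzip   -- '-1800-01-01-000000-g01.h5'
gunzip -- '-1800-01-01-000000-g01.h5.gz'
# Prefixing the glob also works, since "./-..." does not start with "-":
#   gunzip ./*.gz
```

Either form avoids the "invalid option" failure without needing -f.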
I'm trying to gzip all files on Ubuntu that have the file extension .css, .html, or .js in a top directory and all its subdirectories. I want to keep the original files and overwrite the .gz file if it already exists.
So when I have n files, I want to keep those n files and create n additional archive files, not just one.
My try was to run a script that looks like this:
gzip -rkf *.css
gzip -rkf *.html
... one line for each file extension
First: I need one line in that script for each file extension I want to gzip. That's OK, but I hope to find a better way.
Second, and more important: it does not work. Although -r should do the job, the subdirectories are unchanged; the .gz files are only created in the top directory.
What am I missing here?
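One likely explanation: the shell expands *.css in the top directory before gzip ever runs, so -r never sees the subdirectories. A find-based sketch that does the recursion itself (the directory layout below is made up for demonstration); it also handles all three extensions in one command:

```shell
# Set up a throwaway tree to demonstrate on.
cd "$(mktemp -d)"
mkdir -p subdir1
printf 'body{}\n' > testfile.css
printf '<html>\n' > subdir1/page.html

# find recurses, so gzip's -r is not needed; -k keeps the originals
# and -f overwrites any pre-existing .gz files.
find . -type f \( -name '*.css' -o -name '*.html' -o -name '*.js' \) \
     -exec gzip -kf {} +
```

After this, both testfile.css and subdir1/page.html still exist alongside their .gz counterparts.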
Btw: the following is a bug in the verbose output, right? When using the -k and -v options:
-k, --keep keep (don't delete) input files
-v, --verbose verbose mode
The verbose output says it replaces the file, although "replace" implies the original file no longer exists afterwards. Anyway, this is only an output issue.
$ ls
index.html subdir1 testfile testfile.css.gz
javaclass.java subdir2 testfile.css
$ gzip -fkv *.css
testfile.css: 6.6% -- replaced with testfile.css.gz
$ ls
index.html subdir1 testfile testfile.css.gz
javaclass.java subdir2 testfile.css
I have been using the GUI (right click => compress) to try to compress a .tar containing 3 videos totalling 1.7 GB (H.264 MP4s). gzip, lrzip, 7z etc. all do nothing to the file size, and the compressed folder is also 1.7 GB.
I then tried running lrzip from the command line (in case it was a gui problem), and used the -z flag (extreme compression), and this was my output.
As the compression ratio shows, the compressed folder is actually bigger than the original! I don't know why I am having no luck; lrzip in particular should be effective according to reviews I have read and the official docs (files larger than 100 MB, the larger the better) - see https://wiki.archlinux.org/index.php/Lrzip
Why can't I compress my files?
On Fedora/Redhat/CentOS the less
command seems to magically detect a gzipped file and decompress it on the fly, so you can do:
less my_stuff.csv.gz
I've just noticed this doesn't work on Ubuntu 11:
less my_stuff.csv.gz
"my_stuff.csv.gz" may be a binary file. See it anyway?
I've been examining my CentOS VMs to see if there's some shell alias magic that makes it work but there doesn't seem to be. Is gzip support just built in to the CentOS binary?
If anyone knows how this works on CentOS and/or how it can be made to work on Ubuntu I'd be grateful.
I'm aware I can do
zcat my_stuff.csv.gz | less
but that would make my keyboard wear out more quickly.
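For what it's worth, a shell-configuration sketch, assuming an Ubuntu box with the gzip and less packages installed (both commands below ship with those packages):

```shell
# Option 1: zless, a wrapper around less that decompresses on the fly:
#   zless my_stuff.csv.gz
# Option 2: enable less's input preprocessor for plain less,
# e.g. by adding this line to ~/.bashrc:
eval "$(lesspipe)"
# lesspipe sets LESSOPEN so that less pipes .gz files through a
# decompressor; Red Hat-family distros typically ship an equivalent
# LESSOPEN setup system-wide, which would explain the CentOS behaviour.
```

With LESSOPEN set, plain `less my_stuff.csv.gz` shows the decompressed text.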