Many users have their own public SSH keys on GitHub. Is there an easy way to fetch someone's keys knowing only their username? I know it's possible - the Ubuntu installer gets the keys somehow - but I can't find a way to do it. It would be useful to be able to create an account for somebody by asking only for their GitHub username, not an SSH key.
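Ideally it would be something as simple as a one-liner like this (illustrative only - I don't know what the real mechanism is, and USERNAME/paths are placeholders):
curl https://github.com/USERNAME.keys >> /home/someuser/.ssh/authorized_keys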
There is an ext4 filesystem that was created years ago (and has been resized many times since then). After a power failure it stopped mounting. When I try to mount it manually I get an error:
# mount /dev/space/vservershosting-vs /mnt/
mount: /mnt: mount(2) system call failed: Structure needs cleaning.
In dmesg there is more information:
[32618.800854] EXT4-fs error (device dm-44): __ext4_iget:5080: inode #2: block 1953722220: comm mount: invalid block
[32619.264574] EXT4-fs (dm-44): get root inode failed
[32619.264633] EXT4-fs (dm-44): mount failed
fsck passes without actually repairing anything:
# fsck.ext4 -c -f -v /dev/space/vservershosting-vs
e2fsck 1.44.5 (15-Dec-2018)
Checking for bad blocks (read-only test): done
/dev/space/vservershosting-vs: Updating bad block inode.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/space/vservershosting-vs: ***** FILE SYSTEM WAS MODIFIED *****
1551308 inodes used (7.89%, out of 19660800)
115052 non-contiguous files (7.4%)
861 non-contiguous directories (0.1%)
# of inodes with ind/dind/tind blocks: 48107/1394/6
Extent depth histogram: 988992/12798/13
67929516 blocks used (86.38%, out of 78643200)
0 bad blocks
16 large files
1377888 regular files
143712 directories
74 character device files
25 block device files
10 fifos
810 links
29496 symbolic links (29045 fast symbolic links)
94 sockets
------------
1552109 files
(the output is the same with the current version of fsck)
When I try to mount it, it fails. After that, fsck "fixes" the filesystem (a second fsck run doesn't do anything). But the filesystem is still not fixed - when I try to mount it, it fails again:
root@undefine-ThinkPad-T470p:~# fsck.ext4 -v /dev/sdb1
e2fsck 1.45.5 (07-Jan-2020)
/dev/sdb1 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
1551308 inodes used (7.89%, out of 19660800)
115052 non-contiguous files (7.4%)
861 non-contiguous directories (0.1%)
# of inodes with ind/dind/tind blocks: 48107/1394/6
Extent depth histogram: 988992/12798/13
67929516 blocks used (86.38%, out of 78643200)
0 bad blocks
16 large files
1377888 regular files
143712 directories
74 character device files
25 block device files
10 fifos
810 links
29496 symbolic links (29045 fast symbolic links)
94 sockets
------------
1552109 files
root@undefine-ThinkPad-T470p:~# fsck.ext4 -v /dev/sdb1
e2fsck 1.45.5 (07-Jan-2020)
/dev/sdb1: clean, 1551308/19660800 files, 67929516/78643200 blocks
root@undefine-ThinkPad-T470p:~# mount /dev/sdb1 /mnt/test/
mount: /mnt/test: mount(2) system call failed: Structure needs cleaning.
I've tried mounting using alternative superblocks - the same error and result.
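For example, one of the attempts (assuming the usual backup superblock at filesystem block 32768 of a 4k-block filesystem; if I read the docs right, mount's sb= option takes 1k units, hence 131072):
# mount -o sb=131072 /dev/space/vservershosting-vs /mnt/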
But when I connect to the volume using debugfs, I can see all the important content (3 directories) in lost+found.
I can restore all the content using debugfs and its rdump command (see below), but how can I mount/fix that volume?
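For the record, this is roughly how I pull the data out (the destination path is just an example):
# debugfs -R "rdump /lost+found /tmp/recovered" /dev/space/vservershosting-vs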
I'm trying to give a specific user (for example "test") the right to read any newly created directory. I do it like this:
undefine@undefine-ThinkPad-T430s:~/test$ getfacl .
# file: .
# owner: undefine
# group: undefine
user::rwx
group::rwx
other::r-x
undefine@undefine-ThinkPad-T430s:~/test$ setfacl -d -m u:test:rX .
undefine@undefine-ThinkPad-T430s:~/test$ getfacl .
# file: .
# owner: undefine
# group: undefine
user::rwx
group::rwx
other::r-x
default:user::rwx
default:user:test:r-x
default:group::rwx
default:mask::rwx
default:other::r-x
Then, when I create a new directory using a plain mkdir command, it works fine:
undefine@undefine-ThinkPad-T430s:~/test$ mkdir testa
undefine@undefine-ThinkPad-T430s:~/test$ getfacl testa
# file: testa
# owner: undefine
# group: undefine
user::rwx
user:test:r-x
group::rwx
mask::rwx
other::r-x
default:user::rwx
default:user:test:r-x
default:group::rwx
default:mask::rwx
default:other::r-x
But if I create a new directory with a forced mode, the effective rights are empty:
undefine@undefine-ThinkPad-T430s:~/test$ mkdir -m 700 testb
undefine@undefine-ThinkPad-T430s:~/test$ getfacl testb
# file: testb
# owner: undefine
# group: undefine
user::rwx
user:test:r-x #effective:---
group::rwx #effective:---
mask::---
other::---
default:user::rwx
default:user:test:r-x
default:group::rwx
default:mask::rwx
default:other::r-x
And the test user can't read files inside the directory.
Is there any way to avoid that and give the "test" user the right to read directory contents regardless of the mode used when the directory is created? I can work around it with an incron job which "fixes" permissions after a directory is created (sketched below) - but that's a dirty hack and I would like to do it the "right way".
The real problem occurred on a Docker system, where dockerd itself creates directories under /var/lib/docker/containers with mode 0700.
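For completeness, the incron workaround looks more or less like this (from memory, untested; $@ is the watched directory and $# the created entry):
/var/lib/docker/containers IN_CREATE /usr/bin/setfacl -m u:test:rX $@/$#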
There is a Jenkins pipeline job ("parent"). At one of its stages another pipeline job ("child") is called, using the build step.
Is there any way to return something (for example a short text) from the child job to the parent, without using external services like Artifactory and without assuming that the parent and child jobs run on the same machine?
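For reference, the call in the parent stage looks roughly like this (the job name is made up):
def childRun = build job: 'child-pipeline', wait: true
// how do I get a short text from the child back into this scope?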
There is a server with a few disks, and a new server with a PERC controller.
I would like to migrate the existing data to the new server, onto a RAID6 array using 6 disks (4+2).
Unfortunately, I don't have enough free disks to create the "target" RAID6 array right away. I would like to create a degraded RAID6 array using 4 disks (which will perform like a 4-disk RAID0), and then, after migrating the data, add the last 2 disks from the old server and rebuild the array.
Is that possible using MegaCli? I tried the -Force option to -CfgLdAdd while pointing at the missing slots (roughly as shown below), but it didn't work. Is there any other way to get this done?
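What I attempted, more or less (the enclosure:slot IDs are examples from my setup):
# megacli -CfgLdAdd -r6 [32:0,32:1,32:2,32:3] -Force -a0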
There are 3 R430 servers (S1, S2, S3). All of them have 2 processors and 8*8 GB of RAM, in slots A1-A4 and B1-B4.
The memory needs to be extended.
I've read Dell's recommendations (http://www.dell.com/support/article/us/en/19/SLN296980#SampleConfig) and I see 2 possibilities:
- buy 8*16 GB of RAM, put it into S1, and split the memory from S1 between S2 and S3;
- buy additional 8 GB modules (4 per server) and add them to S1..S3.
In the first case there will be 128 GB of RAM in S1 and 96 GB in S2 and S3; in the second, every machine will have 96 GB (the arithmetic is spelled out below).
Does anybody know what speed difference I can expect in scenarios 1 and 2 compared to the current situation? I expect that a configuration where memory is not evenly distributed between the processors will reduce performance. But how big a performance drop can I expect in each case? 1%? 10%? In which scenario will it be smaller?
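For concreteness, my arithmetic (module counts only, nothing else assumed):
scenario 1: S1 = 8 * 16 GB = 128 GB; S2 = S3 = (8 + 4) * 8 GB = 96 GB
scenario 2: S1 = S2 = S3 = (8 + 4) * 8 GB = 96 GB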
I have some virtual machines running software whose licensing check depends on the CPU model string.
They're running on KVM with the default CPU string value, which is QEMU Virtual CPU version 1.7.1.
After upgrading the host server to a newer version of Debian, the CPU string changed to QEMU Virtual CPU version 2.0.0, which breaks the license check.
Is there any way (excluding recompiling/downgrading qemu/kvm) to force a specific virtual machine to use the CPU string QEMU Virtual CPU version 1.7.1?
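The closest thing I've come up with so far (untested; I'm only assuming that qemu's x86 model-id CPU property overrides the brand string) would be something like:
qemu-system-x86_64 -cpu qemu64,model-id='QEMU Virtual CPU version 1.7.1' ...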
I would like to allow a specific user to pass a tab character as a parameter to a program. The full invocation looks like this:
sudo /sbin/vgs --units b --nosuffix --noheadings --separator 'TAB'
I tried putting it into sudoers like this:
user ALL=(ALL) NOPASSWD: /sbin/vgs --units b --nosuffix --noheadings --separator 'TAB'
(TAB is of course the tabulator character.) Unfortunately it doesn't work: sudo asks for a password and doesn't recognize the command. When I omit the --separator 'TAB' part, it works fine. The problem occurs both with the TAB character and with the single quotes. How can I avoid this and allow passing a tabulator as a parameter?
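One workaround I'm considering (a sketch only; the wrapper path is my own choice) is to hide the awkward argument inside a tiny wrapper and whitelist just the wrapper:
#!/bin/sh
# /usr/local/bin/vgs-tab: keeps the TAB character out of sudoers entirely
exec /sbin/vgs --units b --nosuffix --noheadings --separator "$(printf '\t')"
with a sudoers entry of:
user ALL=(ALL) NOPASSWD: /usr/local/bin/vgs-tab
But I'd still prefer a way to express the original command line directly in sudoers.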
There is a Ganeti cluster. Is there any way to run a script on a Ganeti node after an instance starts running on that node? In both situations: started from scratch, and live-migrated from another host.
Is it possible to do this in an "automated" way (excluding modifying the sources)? The only idea I have is to invoke a trigger from the instance towards the node to run the script - but I know that's not a good way.
Are there any scripts run for instances in the node context?
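What I'm hoping for is something shaped like this (the hooks directory layout and the environment variable name are my guesses from Ganeti's hooks documentation, untested):
#!/bin/sh
# e.g. /etc/ganeti/hooks/instance-start-post.d/50-on-instance-start
logger "instance ${GANETI_INSTANCE_NAME} is now running on $(hostname)"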