I have installed a new Seagate hard disk in my old system and installed Ubuntu 14.04. I checked the disk status using gksudo gnome-disks, and it reported "Disk is OK, one attribute failed in the past". I could not find any useful links about this message. Can someone tell me what it means?
A newbie's questions
I'm trying to run an upgrade and I get the following error:
$ sudo apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following package was automatically installed and is no longer required:
notepadqq-common
Use 'apt-get autoremove' to remove it.
The following packages will be upgraded:
fonts-opensymbol
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/110 kB of archives.
After this operation, 1,024 B of additional disk space will be used.
Do you want to continue? [Y/n] Y
E: Invalid archive signature
E: Internal error, could not locate member control.tar.{gzbz2xzlzma}
E: Prior errors apply to /var/cache/apt/archives/fonts-opensymbol_2%3a102.6+LibO4.2.8-0ubuntu5.2_all.deb
debconf: apt-extracttemplates failed: No such file or directory
dpkg-deb: error: `/var/cache/apt/archives/fonts-opensymbol_2%3a102.6+LibO4.2.8-0ubuntu5.2_all.deb' is not a debian format archive
dpkg: error processing archive /var/cache/apt/archives/fonts-opensymbol_2%3a102.6+LibO4.2.8-0ubuntu5.2_all.deb (--unpack):
subprocess dpkg-deb --control returned error exit status 2
E: Sub-process /usr/bin/dpkg returned an error code (1)
I'm using Ubuntu 12.04 and trying to expand the size allocated to a Linux virtual machine, but I'm not able to find vmware-vdiskmanager or any package that installs it. Can anyone please tell me how to install vmware-vdiskmanager, or give a link to download it?
I'm trying to pass the year variable to re.findall, but it doesn't work (no errors). Can anyone tell me where I'm going wrong?
year=2013
links = re.findall(r"('+year+'-.*\/t.*.html)", line)
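For reference, a minimal sketch of one way the variable could be interpolated: inside the raw string above, '+year+' is just literal text, so the pattern never matches. (The sample line below is made up for illustration.)

```python
import re

year = 2013
# build the pattern from the variable instead of quoting "+year+" literally
pattern = r"(%s-.*/t.*\.html)" % year
line = "see 2013-archive/topic1.html for details"  # hypothetical input line
links = re.findall(pattern, line)
```

Passing `str(year)` through string formatting (or concatenation) is what gets the digits into the pattern before `re.findall` compiles it.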
The following code is meant to extract /support/security/*.html links from a file (urlfile contains about 1000 links) into the urlsort file using a regex. But I'm weak in regex; can anyone show me how to do that?
#!/usr/bin/env python
import re,sys
fileHandle = open('urlfile', 'r')
f1 = open('urlsort', 'w')
for line in fileHandle.readlines():
    links = re.findall(r"(\/support\/security\/*.html.*?)", line)
    for link in links:
        sys.stdout = f1
        print ('%s' % (link[0]))
        sys.stdout = sys.__stdout__
f1.close()
fileHandle.close()
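A sketch of one way the extraction could be written (assuming Python 3 and a made-up sample line). Two things bite in the attempt above: the regex needs an escaped dot and a real wildcard before ".html", and with a single capture group re.findall already returns strings, so link[0] would emit only the first character. Writing to the file directly also avoids swapping sys.stdout.

```python
import re

# escaped dot, and \S*? as a lazy wildcard up to the first ".html"
pattern = re.compile(r"(/support/security/\S*?\.html)")

def extract_links(lines):
    """Collect every /support/security/...html path found in the lines."""
    links = []
    for line in lines:
        links.extend(pattern.findall(line))  # findall yields strings here
    return links

# usage sketch:
# with open('urlfile') as src, open('urlsort', 'w') as dst:
#     for link in extract_links(src):
#         dst.write(link + '\n')
```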
I am trying to download a page using wget from Python by passing a variable that holds the URL, but it didn't work.
url=http://www.example.com/support/security/
os.system("wget -P download url")
Can anyone tell me what is wrong with this?
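A sketch of the likely fix: inside the quoted command string, url is just the literal word "url", so the variable has to be interpolated into the string (or, better, passed as an argument list via subprocess).

```python
url = "http://www.example.com/support/security/"

# "url" inside the quotes is literal text; interpolate the variable instead
cmd = "wget -P download %s" % url
# os.system(cmd) would now fetch the intended page; alternatively
# subprocess.call(["wget", "-P", "download", url]) avoids shell quoting issues
```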
I would like a brief explanation of the following command line:
grep -i 'abc' content 2>/dev/null
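In short: search the file content for lines containing 'abc', case-insensitively (-i), and throw away anything written to stderr (file descriptor 2 redirected to /dev/null), such as a "No such file or directory" message. A rough Python equivalent of the matching part, as a sketch:

```python
def grep_i(pattern, lines):
    """Rough equivalent of `grep -i pattern`: case-insensitive substring match."""
    pattern = pattern.lower()
    return [line for line in lines if pattern in line.lower()]

# `2>/dev/null` corresponds to suppressing error output entirely,
# e.g. swallowing the exception a missing file would raise.
matches = grep_i("abc", ["xyzABCdef", "no match here"])
```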
I want to write a Python program to download the contents of a web page, and then download the contents of the web pages that the first page links to.
For example, the main web page is http://www.adobe.com/support/security/, and the pages I want to download are http://www.adobe.com/support/security/bulletins/apsb13-23.html and http://www.adobe.com/support/security/bulletins/apsb13-22.html.
There is one condition I want to meet: it should download only web pages under bulletins, not under advisories (http://www.adobe.com/support/security/advisories/apsa13-02.html).
#!/usr/bin/env python
import urllib
import re
import os
import sys
page = urllib.urlopen("http://www.adobe.com/support/security/")
page = page.read()
fileHandle = open('content', 'w')
links = re.findall(r"<a.*?\s*href=\"(.*?)\".*?>(.*?)</a>", page)
for link in links:
    sys.stdout = fileHandle
    print ('%s' % (link[0]))
    sys.stdout = sys.__stdout__
fileHandle.close()
os.system("grep -i '\/support\/security\/bulletins\/' content >> content1")
I've already extracted the bulletin links into content1, but I don't know how to download the content of those web pages, using content1 as input.
The content1 file is as shown below:
/support/security/bulletins/apsb13-23.html
/support/security/bulletins/apsb13-23.html
/support/security/bulletins/apsb13-22.html
/support/security/bulletins/apsb13-22.html
/support/security/bulletins/apsb13-21.html
/support/security/bulletins/apsb13-21.html
/support/security/bulletins/apsb13-22.html
/support/security/bulletins/apsb13-22.html
/support/security/bulletins/apsb13-15.html
/support/security/bulletins/apsb13-15.html
/support/security/bulletins/apsb13-07.html
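One way the download step could look, as a sketch (Python 3; the original script uses the Python 2 urllib): turn each extracted path into an absolute URL, deduplicate, keep only bulletin pages, and then fetch each one.

```python
from urllib.request import urlopen  # needed only for the commented usage below

BASE = "http://www.adobe.com"  # the site the relative paths belong to

def bulletin_urls(paths):
    """Deduplicate paths and keep only pages under /support/security/bulletins/."""
    seen = []
    for path in paths:
        if "/support/security/bulletins/" in path and path not in seen:
            seen.append(path)
    return [BASE + p for p in seen]

# usage sketch (requires network access):
# for url in bulletin_urls(open('content1').read().split()):
#     with open(url.rsplit('/', 1)[1], 'wb') as out:
#         out.write(urlopen(url).read())
```

The substring test is also what enforces the condition: advisory paths never contain /support/security/bulletins/, so they are dropped.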
root@user-desktop:/etc# sudo /usr/sbin/service vsftpd restart
restart: Unknown instance:
I'm using a vsftpd server and am logged in with my local user account. But I want to log out from the FTP server, and there is no logout option. Can anyone help me?
I'm trying to install the requests module using easy_install, but I'm getting the following error:
$ sudo easy_install requests
Processing requests
error: Not a recognized archive type: requests
If I try with pip, I get the following error:
$ pip install requests
Unknown or unsupported command 'install'
While running the simple Python program below, I'm getting the following error:
./url_test.py: line 2: syntax error near unexpected token `('
./url_test.py: line 2: `response = urllib2.urlopen('http://python.org/')'
import urllib2
response = urllib2.urlopen('http://python.org/')
print "Response:", response
# Get the URL. This gets the real URL.
print "The URL is: ", response.geturl()
# Getting the code
print "This gets the code: ", response.code
# Get the Headers.
# This returns a dictionary-like object that describes the page fetched,
# particularly the headers sent by the server
print "The Headers are: ", response.info()
# Get the date part of the header
print "The Date is: ", response.info()['date']
# Get the server part of the header
print "The Server is: ", response.info()['server']
# Get all data
html = response.read()
print "Get all data: ", html
# Get only the length
print "Get the length :", len(html)
# Showing that the file object is iterable
for line in response:
    print line.rstrip()
# Note that the rstrip strips the trailing newlines and carriage returns before
# printing the output.
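That error comes from the shell, not from Python: without a shebang line, ./url_test.py is executed by /bin/sh, which cannot parse the urlopen(...) call. Adding the interpreter line (or running `python url_test.py`) fixes it. A sketch, shown here with the Python 3 spelling of urllib2:

```python
#!/usr/bin/env python3
# With this first line present, the script runs under Python rather
# than the shell. urllib2 was folded into urllib.request in Python 3:
from urllib.request import urlopen

def fetch(url):
    """Return (final URL, status code, headers) - requires network access."""
    response = urlopen(url)
    return response.geturl(), response.getcode(), response.info()
```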
#!/usr/bin/env python
import httplib
import sys
#get http server ip
http_server = sys.argv[0]
#create a connection
conn = httplib.HTTPConnection(http_server)
while 1:
    cmd = raw_input('input command (ex. GET index.html): ')
    cmd = cmd.split()
    if cmd[0] == 'exit': #type exit to end it
        break
    #request command to server
    conn.request(cmd[0],cmd[1])
    #get response from server
    rsp = conn.getresponse()
    #print server response and data
    print(rsp.status, rsp.reason)
    data_received = rsp.read()
    print(data_received)
conn.close()
Error
Traceback (most recent call last):
File "./client1.py", line 19, in <module>
conn.request(cmd[0],cmd[1])
IndexError: list index out of range
Can anyone tell me why that error occurs, and how to fix the code? It is client-side code to connect to a server.
My input is: GET index.html
But now my error is:
File "./client1.py", line 19, in <module>
conn.request(cmd[0],cmd[1])
File "/usr/lib/python2.6/httplib.py", line 910, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.6/httplib.py", line 947, in _send_request
self.endheaders()
File "/usr/lib/python2.6/httplib.py", line 904, in endheaders
self._send_output()
File "/usr/lib/python2.6/httplib.py", line 776, in _send_output
self.send(msg)
File "/usr/lib/python2.6/httplib.py", line 735, in send
self.connect()
File "/usr/lib/python2.6/httplib.py", line 716, in connect
self.timeout)
File "/usr/lib/python2.6/socket.py", line 500, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
socket.gaierror: [Errno -2] Name or service not known
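A sketch of the client with both tracebacks addressed (written with the Python 3 spelling, where httplib is http.client). The IndexError comes from indexing cmd[1] when the input does not split into two words, so a guard is needed before the request; the name-resolution error comes from sys.argv[0], which is the script's own name rather than the server host, so the host should be taken from argv[1].

```python
#!/usr/bin/env python3
from http.client import HTTPConnection  # Python 3 name for httplib

def parse_command(text):
    """Split 'GET index.html' into words, or return None for a blank line."""
    parts = text.split()
    return parts or None

def run(host):
    conn = HTTPConnection(host)
    while True:
        cmd = parse_command(input('input command (ex. GET index.html): '))
        if cmd is None:
            continue                      # blank line: ask again
        if cmd[0] == 'exit':              # type exit to end it
            break
        if len(cmd) != 2:                 # guard before indexing cmd[1]
            print('usage: METHOD path')
            continue
        conn.request(cmd[0], '/' + cmd[1].lstrip('/'))
        rsp = conn.getresponse()
        print(rsp.status, rsp.reason)
        print(rsp.read())
    conn.close()

# usage sketch: ./client1.py www.example.com
# the host is sys.argv[1]; sys.argv[0] is always the script name, which is
# why getaddrinfo reported "Name or service not known"
```

Note also that the input should be `GET index.html` without the leading colon shown above; `:GET` is not a valid HTTP method.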