So I want to import an Application Load Balancer under Terraform management. I managed to add some of its attributes to the configuration (cross-region, deletion protection, global accelerator, etc.) and ran the import, but then I found out that I forgot to add the "config" attribute. How do I include that in the import, which I already did? Terraform says that doing multiple imports will result in unknown behavior. Also, if I continue without adding the "config", will running apply remove that config? Another thing: I have multiple rules under the load balancer. Do I have to import all of them in order to add another rule with a specific priority? Thank you,
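A sketch of how the import can be inspected and, if needed, redone; the resource address aws_lb.example and the ARN below are placeholders, not taken from the real setup:

```shell
# Show the attributes Terraform recorded for the imported resource
terraform state show aws_lb.example

# Plan output tells you exactly what an apply would change or strip
terraform plan

# If the configuration was incomplete at import time, you can drop the
# resource from state and import again; state rm never touches the real ALB
terraform state rm aws_lb.example
terraform import aws_lb.example arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/0123456789abcdef
```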
logax's questions
We recently had a problem with our TURN server (coturn): some clients who use firewalls to block all outgoing UDP connections have trouble connecting to it. As far as I know, when a UDP connection doesn't work, it should fall back to TCP, right? So why is this not happening? We tried opening all traffic on our server to test, but the same issue is still present. How do I deal with a situation like this? Asking as someone who is not very familiar with WebSockets and TURN servers.
Our TURN server is behind an AWS Network Load Balancer; the same issue happens when it is not behind a load balancer. This is my config, on Ubuntu 16.04:
server-name=example.com
cert=/etc/letsencrypt/live/example.com/cert.pem
pkey=/etc/letsencrypt/live/example.com/privkey.pem
realm=example.com
fingerprint
listening-ip=0.0.0.0
external-ip=*.*.*.*/10.0.1.95 #or just the external ip
listening-port=443
min-port=10000
max-port=60000
log-file=/var/log/turnserver.log
verbose
user=user:password
lt-cred-mech
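Worth noting for anyone reading along: TURN over TCP is not automatic just because UDP is blocked; the browser only gathers TCP or TLS relay candidates when the client's ICE server list contains URIs that explicitly ask for them. A sketch of such a list, with the hostname and credentials as placeholders matching the config above:

```
turn:example.com:443?transport=udp
turn:example.com:443?transport=tcp
turns:example.com:443?transport=tcp
```

With only a plain turn: URI (which defaults to UDP) and no ?transport=tcp entry, a UDP-blocking firewall leaves the client with no relay candidates at all.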
I started getting a general protection fault in the last few days, which causes my container to restart and thus disrupts the service. Whenever the general protection error happens, I can see in the CloudWatch monitoring charts that there were some sudden small spikes a few minutes before. I don't know if that is related or not, but I wanted to mention it.
This is the error:
kernel: [10083817.146880] traps: node[17179] general protection ip:7f918c073529 sp:7ffcb10f0430 error:0 in libc-2.24.so[7f918c03f000+195000]
I'm using Ubuntu 16.04 and Docker version 18.09.7, build 2d0083d.
We have installed our own Postfix/Virtualmin server, and we have a Laravel application. The problem is that when we use external SMTP servers, the TLS option works without problems and the emails are sent fine, but when using our new SMTP server with TLS, we get this error:
stream_socket_enable_crypto(): SSL operation failed with code 1. OpenSSL Error messages: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
It has to be something related to missing certificates on our Postfix/Virtualmin server, but I do not know where to start. We already have Let's Encrypt certificates for it, but I believe we may need to convert them to a CA bundle or something like that? I'm not sure; that's why I need your help.
Thank you,
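One common cause of that exact OpenSSL error is the SMTP server presenting only the leaf certificate instead of the full chain. A sketch for main.cf, assuming the standard Let's Encrypt layout (the domain and paths are guesses, not confirmed from the setup):

```
# /etc/postfix/main.cf
smtpd_tls_cert_file = /etc/letsencrypt/live/example.com/fullchain.pem
smtpd_tls_key_file  = /etc/letsencrypt/live/example.com/privkey.pem
```

You can check which chain the server actually sends with openssl s_client -connect mail.example.com:587 -starttls smtp -showcerts (hostname is a placeholder).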
Just as the title says: we have a website that uses third-party SMTP credentials to send emails, but we keep getting our SMTP credentials hacked and used to send spam, which results in our SMTP account being suspended. We first used SES, then figured out that we needed to add SPF, DKIM, and DMARC. After we added them, we moved to SendGrid and got hacked again. Teammates think it is because of a weak password on the SendGrid account, but I don't think so, because a password strength test says it is strong and would take two thousand years to crack. We don't really know the cause. We are using Laravel 7 for our website. How is the hacker able to access the .env file?
Help, please.
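Separate from how the credentials leak, dotfiles such as .env should never be reachable over HTTP at all. If the site runs on Apache (an assumption), a fragment like this denies them; note that Laravel's recommended setup avoids the issue entirely by pointing the document root at public/:

```
# Apache vhost or .htaccess fragment: deny access to all dotfiles
<FilesMatch "^\.">
    Require all denied
</FilesMatch>
```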
I'm looking to recursively download my WordPress website into a static copy using wget. The problem is that whenever I do, it uses far too much bandwidth (3.5 GB) even though I only end up downloading 20 MB, which is weird. So I'm looking to download via localhost, but when I use wget with localhost I only get the index page. Now, we all know that WordPress saves the website URL in the database, so how am I supposed to download via localhost? I already set it up in the Apache configuration; I just want to download without using so much bandwidth.
I tried the -N option to reduce bandwidth, but I keep getting errors saying the files don't have a Last-Modified header, so it is not helping.
This is the command I'm using:
wget -N --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains website website -P /opt/
Thank you,
UPDATE 1: I used /etc/hosts to point the website at 127.0.0.1, but it still redirects back to the original IP, and even then it only downloads the index page.
Is there a way to tell the server to force-add a Last-Modified header to all WordPress files?
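A sketch of the /etc/hosts approach, with example.com standing in for the real domain. Note it only helps if the siteurl stored in the WordPress database is a hostname: if WordPress stores the raw IP, every request redirects back to that IP and bypasses the override, which would match the behavior in UPDATE 1.

```shell
# /etc/hosts entry so the crawl never leaves the machine:
#   127.0.0.1  example.com

wget --recursive --page-requisites --html-extension --convert-links \
     --no-clobber --restrict-file-names=windows \
     --domains example.com \
     http://example.com/ -P /opt/
```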
I'm trying to launch a Node.js container using AWS Fargate. The problem I'm facing is that Fargate gives me this error:
cannot find this module "/path/to/file/webrtc.js"
And when I execute npm install from the command section when launching the container, it gives me:
npm WARN enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
npm WARN saveError EACCES: permission denied, open '/usr/src/app/package-lock.json.12345678'
npm WARN saveError ENOENT: no such file or directory, open '/usr/src/app/package.json'
How do I give permission? I tried changing the JSON file and swapping user = null for user = root, but the same error appears.
Thank you,
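Those ENOENT warnings usually mean npm is running in a directory with no package.json, i.e. the application was never copied into the image at that path. A sketch Dockerfile, with the paths and the node tag assumed from the error messages rather than known:

```dockerfile
FROM node:12
# Everything below lands in /usr/src/app, the path npm complained about
WORKDIR /usr/src/app
# Install dependencies at image build time, not in the Fargate command section
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "webrtc.js"]
```

With dependencies baked in at build time, the container's command section does not need to run npm install at all.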
I have just set up a coturn server. It works perfectly fine when using the IP or the domain without a load balancer; it was tested using this online tool:
https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/
The problem is when I use a Network Load Balancer: routing TCP_UDP works on port 80, but when trying to use TLS on port 443, it doesn't work.
I configured the Network Load Balancer to route TLS traffic on port 443 to the target group, also on port 443. I'm using Let's Encrypt certificates for domain.com and *.domain.com on my Network Load Balancer. The same certificates are added in the turnserver.conf config file.
And this is my config:
external-ip=1.2.3.4
listening-port=80
min-port=10000
max-port=20000
log-file=/var/log/turnserver.log
verbose
tls-listening-port=443
lt-cred-mech
server-name=domain.com
realm=domain.com
user=tester:12345678
total-quota=100
stale-nonce=600
cert=/opt/coturn/fullchain.pem
pkey=/opt/coturn/privkey.pem
cipher-list="ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS"
log-file=/var/log/coturn.log
# Specify the process user and group
proc-user=turnserver
proc-group=turnserver
And this is what I get from the log:
3170: IPv4. tcp or tls connected to: 9.8.7.6:34274
3170: session 001000000000003730: TCP socket closed remotely 9.8.7.6:34274
3170: session 001000000000003730: closed (2nd stage), user <> realm <domain.com> origin <>, local 0.0.0.0:443, remote 9.8.7.6:34274, reason: TCP connection closed by client (callback)
And by the way, I always get a 701 error from the online tool.
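One thing worth checking, as an assumption about the setup rather than something visible in the log: a TLS-type NLB listener terminates TLS at the load balancer and forwards plain TCP, so coturn then receives non-TLS bytes on its tls-listening-port and the session dies much as logged. A TCP (passthrough) listener on 443 lets coturn do its own TLS with the certs in turnserver.conf. A sketch with the AWS CLI, ARNs entirely hypothetical:

```shell
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/turn-nlb/0123456789abcdef \
    --protocol TCP --port 443 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/turn-tls/0123456789abcdef
```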
Thank you,
I want to find a way to capture the browser console of a client that is using my server and save it to a file on the server. Is this possible using a shell script?
I can't seem to find documentation that will help me download a zip file from AWS S3 to an instance using Terraform. Can someone help me find a solution to this?
Thank you.
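For reference, Terraform itself has no resource that copies an S3 object onto an instance; the usual workarounds are a user_data boot script or a provisioner. A minimal user_data sketch, where the AMI ID, profile name, bucket, and paths are all hypothetical and the instance profile is assumed to grant s3:GetObject:

```hcl
resource "aws_instance" "example" {
  ami                  = "ami-0123456789abcdef0"   # hypothetical
  instance_type        = "t3.micro"
  iam_instance_profile = "s3-read-profile"         # hypothetical, needs s3:GetObject

  user_data = <<-EOF
    #!/bin/bash
    aws s3 cp s3://my-bucket/archive.zip /opt/archive.zip
    unzip /opt/archive.zip -d /opt/app
  EOF
}
```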
I'm trying to learn Ansible. I'm working on creating an instance and uploading a file into it. The file I want to put onto the EC2 instance is stored in S3, but it keeps saying that the destination inside the EC2 instance doesn't exist, even though it does exist.
This is the task that is failing; everything before it, including the creation of the instance, works fine:
- name: Deploy war file
  aws_s3:
    bucket: "{{ war_bucket }}"
    object: "{{ war_file }}"
    dest: "{{ war_deploy_path }}/{{ war_file }}"
    mode: get
    overwrite: no
  register: war_downloaded
And this is how I declared my variables:
war_file: file.war
war_bucket: ansible-bucket
war_deploy_path: /opt/folder/file.war
And this is the error I get:
[Errno 2] No such file or directory: '/opt/folder/file.war.1f1ccA91'
Why is it adding this weird suffix "1f1ccA91"? Is it causing the problem?
Update: I tried changing the destination from "{{ war_deploy_path }}/{{ war_file }}" to "{{ war_deploy_path }}", but the same problem persists; the error is now [Errno 2] No such file or directory: '/opt/folder.Ac2926c3'.
Important Update 2: OK, so for the sake of testing, I decided to create the same path on my local machine, and to my surprise, this script is actually running on my local machine instead of the EC2 instance. So now, how do I make it run on the EC2 instance?
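A common fix for the runs-on-localhost surprise is a two-play structure: add_host puts the freshly created instance into an in-memory group, and a second play targets that group, so the S3 download executes on the instance itself. A sketch where ec2 is an assumed register name from the create-instance task:

```yaml
# ...at the end of the play that creates the instance:
    - name: Add the new instance to the in-memory inventory
      add_host:
        name: "{{ ec2.instances[0].public_ip }}"   # assumed register/field names
        groups: launched

# ...then a second play in the same playbook:
- hosts: launched
  become: yes
  tasks:
    - name: Deploy war file on the instance itself
      aws_s3:
        bucket: "{{ war_bucket }}"
        object: "{{ war_file }}"
        dest: "{{ war_deploy_path }}/{{ war_file }}"
        mode: get
```

Running aws_s3 on the remote host also requires boto3 installed there and AWS credentials visible to it (e.g. an instance profile).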
I'm looking for a way to both receive emails on my mail server and store them in S3 at the same time using SES. I found out that it can't be done unless I do some kind of forwarding. So what I did was create an additional subdomain, ses.example.com, point its DNS MX record at inbound-smtp.us-east-1.amazonaws.com, and then create an email address called [email protected] configured under SES to store to S3 (using rule sets). In addition, I used the "always_bcc" configuration to forward all mail to [email protected]. By doing so, I managed to receive and store only locally sent mail; mail coming from outside is not stored. I think that is happening because when the mail is forwarded, the "From" header stays the same, but I'm not really sure. Is there a way around this?
I just need to know if there is a way to use SES to both store emails in S3 and deliver them to my mail server at the same time.
Tell me if you need more information, please.
I'm using a Zimbra mail server with Postfix.
This is what my DNS looks like:
example.com        MX 10    mail.example.com.
ses.example.com    MX 10    inbound-smtp.us-east-1.amazonaws.com
I'm looking for a way to forward all incoming mail to an external address using Postfix/Zimbra. I tried "always_bcc", but it forwards all mail, including outgoing; I'm looking to forward only incoming mail.
Thank you,
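Postfix's recipient_bcc_maps keys on the envelope recipient, so it copies only mail addressed to your domain and leaves outgoing mail alone. A sketch, with the domain and target address as placeholders:

```
# /etc/postfix/main.cf
recipient_bcc_maps = hash:/etc/postfix/recipient_bcc

# /etc/postfix/recipient_bcc  (build with: postmap /etc/postfix/recipient_bcc)
@example.com    [email protected]
```

After running postmap and postfix reload, each message delivered to an @example.com recipient gets a copy sent to the archive address.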
I'm trying to store emails I receive into an S3 bucket. I followed this tutorial, among multiple others: https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-getting-started.html
My MX record is set to my mail host, like this:
domain MX 10 mail.domain
When I change it to:
domain MX 10 inbound-smtp.us-east-1.amazonaws.com
I no longer receive mail, and emails are still not stored.
I do not know what exactly is missing. Can someone help, please?
Update: I managed to follow Mlu's answer and I'm now very close to the solution. The only problem is that AWS SES does not accept a "From" address outside of my domain when the mail is going to another outside domain.
For example: A sends an email to B, and B forwards (it looks more like a redirect) the email to C, so C sees that he got a message from A, not B. AWS SES doesn't like that and will give an error like this:
554 Message rejected: Email address is not verified. The following identities failed the check in region US-EAST-1: [email protected], Jon Doe (in reply to end of DATA command).
I was tasked with setting up SSL on a server. This server uses WildFly, so I have to make a keystore that contains everything I got: the server certificate, the intermediate certificate, and the key file.
First I chained the server cert and the intermediate cert, and then I used OpenSSL to create a PKCS#12 file. I then used keytool to create a keystore from that PKCS#12 file.
The problem is that when I open the keystore file or the PKCS#12 file, I find that it doesn't contain the intermediate cert; it only has the server cert. I've done this procedure before and it worked.
Does anyone know what the problem could be?
Extra info: the intermediate certificate is a little old (from 2010), uses SHA-1, and will expire in 9 months, which is weird, unlike my server cert, which is new and uses SHA-256.
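A self-contained way to reproduce and check the chain-packing step: a throwaway CA below stands in for the real intermediate, so every filename and subject is a placeholder, not the real material.

```shell
# 1) Toy "intermediate" CA standing in for the real one
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
        -subj "/CN=Toy Intermediate" -days 2
# 2) Server key + CSR, signed by the toy CA
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
        -subj "/CN=example.com"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -out server.crt -days 2
# 3) The chain file must contain BOTH certs, server first
cat server.crt ca.crt > chain.pem
# 4) Export to PKCS#12 -- every cert in the input file goes in, not just the leaf
openssl pkcs12 -export -in chain.pem -inkey server.key -name wildfly \
        -passout pass:changeit -out keystore.p12
# 5) Count the certificates actually inside the PKCS#12 file
openssl pkcs12 -in keystore.p12 -passin pass:changeit -nokeys 2>/dev/null \
    | grep -c "BEGIN CERTIFICATE"
# prints: 2
```

If the count comes back 1 with the real files, the intermediate never made it into the chain file (wrong concatenation order, or a missing newline between the two PEM blocks, are common causes); the later keytool -importkeystore step can only carry across what the PKCS#12 file already holds.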