I am using an Application Load Balancer (ALB) with a target group, and two Fargate ECS tasks run inside that target group. Both tasks use the same PHP Docker image. When I upload a CSV file, the tasks listed in the CSV should be pushed to SQS. With two tasks running, nothing reaches SQS and no error messages are shown. When I scaled the ECS service down to one task (initially it was two), SQS started working fine. How do I resolve this issue so it also works with multiple ECS containers?
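For illustration, a minimal sketch of the kind of upload handler involved, using the AWS SDK for PHP (the queue URL, region and CSV field handling below are placeholders, not the actual application code):

<?php
// Sketch only: push each CSV row to SQS with the AWS SDK for PHP.
// Queue URL, region and CSV layout are placeholders.
require 'vendor/autoload.php';

use Aws\Sqs\SqsClient;

$sqs = new SqsClient([
    'region'  => 'us-east-1',        // placeholder region
    'version' => '2012-11-05',
]);

$queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/tasks-queue'; // placeholder

$handle = fopen($_FILES['csv']['tmp_name'], 'r');
while (($row = fgetcsv($handle)) !== false) {
    $sqs->sendMessage([
        'QueueUrl'    => $queueUrl,
        'MessageBody' => json_encode($row),  // one SQS message per CSV row
    ]);
}
fclose($handle);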
PersistentVolume and PersistentVolumeClaim YAML file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/datatypo"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  storageClassName: manual
  volumeName: my-volume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Service and Deployment YAML file:
apiVersion: v1
kind: Service
metadata:
  name: typo3
  labels:
    app: typo3
spec:
  type: NodePort
  ports:
    - nodePort: 31021
      port: 80
      targetPort: 80
  selector:
    app: typo3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: typo3
spec:
  selector:
    matchLabels:
      app: typo3
  replicas: 1
  template:
    metadata:
      labels:
        app: typo3
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - typo3
      containers:
        - image: image:typo3
          name: typo3
          imagePullPolicy: Never
          ports:
            - containerPort: 80
          volumeMounts:
            - name: my-volume
              mountPath: /var/www/html/
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: my-claim
Note: without the persistent volume, the contents are visible inside the pod (in /var/www/html). After adding the persistent volume, no contents show up in that folder, and nothing appears in the external mount path /mnt/datatypo on the node either.
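A rough sketch of one workaround I am considering (untested): an initContainer that copies the image's /var/www/html into the volume at a different mount point before the main container mounts it over /var/www/html. Only the pod template's spec section is shown, with the same names as in the manifests above:

    spec:
      initContainers:
        - name: seed-webroot
          image: image:typo3
          imagePullPolicy: Never
          # Copy the TYPO3 files shipped in the image into the (initially empty) volume.
          command: ["sh", "-c", "cp -a /var/www/html/. /seed/"]
          volumeMounts:
            - name: my-volume
              mountPath: /seed
      containers:
        - image: image:typo3
          name: typo3
          imagePullPolicy: Never
          ports:
            - containerPort: 80
          volumeMounts:
            - name: my-volume
              mountPath: /var/www/html/
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: my-claim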
I am a newbie with Nginx. I want to extract a particular value, "en", from this form data that Nginx sees in $request_body: "csrfmiddlewaretoken=7cV8XPsznWBtKRDw2surXEYoUU6f4Bow4JHwdzrVWOEle0J1rw35PIYmqAVBIk52&next=/&language=en".
Can this be done in Nginx? If so, how can I write a rule in Nginx for it? Please support me.
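To illustrate what I am after, a rough, untested sketch of the kind of rule I imagine (these directives sit at the http level; as far as I understand, $request_body is only populated once Nginx has read the body, e.g. for a proxied request with request buffering enabled):

# Capture the "language" field from the raw form body into $form_language.
map $request_body $form_language {
    default                          "";
    "~language=(?<lang>[A-Za-z-]+)"  $lang;
}

log_format with_lang '$remote_addr "$request" lang=$form_language';

server {
    listen 80;
    access_log /var/log/nginx/lang.log with_lang;

    location / {
        proxy_pass http://localhost:8000;   # the body is read here, filling $request_body
    }
}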
I have deployed a simple Django application on an AWS server and created an Nginx config file as follows, but its static files are not being detected. The location of my static folder is /path/static.
The application looks for static files at http://public_ip/static, but I need to achieve the same under http://public_ip/portal.
server {
    listen 80;
    server_name 127.0.0.1;

    location /portal {
        include proxy_params;
        proxy_pass http://localhost:8000/en;
    }
}
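To show the direction I have been trying, an untested sketch (assuming /path/static is the static folder mentioned above and Django runs on port 8000):

server {
    listen 80;
    server_name 127.0.0.1;

    # Serve the static files directly from disk.
    location /static/ {
        alias /path/static/;
    }

    location /portal {
        include proxy_params;
        proxy_pass http://localhost:8000/en;
    }
}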
I have deployed a simple Django application on an AWS server and created an Nginx config file as follows.
server {
    listen 80;
    server_name 127.0.0.1;

    location /portal {
        include proxy_params;
        proxy_pass http://localhost:8000;
    }
}
But it is not working and returns "404 Not Found".
The Django application on its own works at a URL like http://public_ip/en/, but I need to serve it at http://public_ip/portal.
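For reference, the kind of setup I have been experimenting with (an untested sketch; as far as I understand, the prefix also has to be reflected on the Django side, e.g. with FORCE_SCRIPT_NAME = '/portal'):

server {
    listen 80;
    server_name 127.0.0.1;

    location /portal/ {
        include proxy_params;
        # Strip the /portal prefix before passing the request to Django.
        proxy_set_header X-Forwarded-Prefix /portal;
        proxy_pass http://localhost:8000/;
    }
}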
I have deployed my front-end application on port 7000. I need to write a rule in Nginx so that every HTTP request on port 7000 (http://example.com:7000) is automatically redirected to HTTPS on the same port (https://example.com:7000); a sketch of one approach I came across is shown after the notes below.
Please support me in solving this issue. This is my current Nginx configuration file:
server {
    listen 7000 ssl;

    ssl_certificate /new_keys/new_k/ssl_certificate/star_file.crt;
    ssl_certificate_key /new_keys/new_k/ssl_certificate/private.key;

    root /home_directory;
    index index.html index.htm index.nginx-debian.html;
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    error_page 404 /custom_404.html;
    location = /custom_404.html {
        root /usr/share/nginx/html;
        internal;
    }
}
Notes:
- The application URL served on port 7000 is "http://example.com:7000/#/"
- Port 80 is already taken by another application
- I currently have a wildcard SSL certificate
- The server IP points to a single domain only
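The approach I came across and would like to confirm relies on Nginx's non-standard 497 code, which is returned when a plain-HTTP request arrives on an SSL-enabled port (untested sketch based on my configuration above):

server {
    listen 7000 ssl;
    server_name _;

    ssl_certificate     /new_keys/new_k/ssl_certificate/star_file.crt;
    ssl_certificate_key /new_keys/new_k/ssl_certificate/private.key;

    # 497 = "HTTP request was sent to HTTPS port": redirect to HTTPS on the same port.
    error_page 497 =301 https://$host:7000$request_uri;

    root /home_directory;
    index index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ =404;
    }
}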
We are trying to install BigBlueButton on an Ubuntu server behind a firewall. The installation requires the UDP port range 16384-32768 to be open in the firewall, but we don't know whether these UDP ports are actually open. When we tested with netcat, we were able to communicate; however, an nmap scan shows these UDP ports in the state "closed".
The nmap command used (for a single port):

nmap -sS -sU -PN -p 16384 EXTERNAL_IP

Output:

Host is up (0.039s latency).
PORT       STATE     SERVICE
16384/tcp  filtered  connected
16384/udp  closed    connected
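For completeness, the kind of netcat check we used (a representative sketch, not necessarily the exact commands), which did get traffic through:

# On the Ubuntu server behind the firewall: listen on one UDP port from the range.
nc -u -l 16384

# From an external machine: send a test datagram to the server's external IP.
echo "hello" | nc -u EXTERNAL_IP 16384
# If "hello" shows up on the server's listener, UDP 16384 is open through the firewall.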