I have a task regarding agent nodes:
✓ The script should run on certain nodes, not all nodes
✓ I should get a mail when a node is down
✓ It should print the node name and IP address
Could anyone please suggest how to achieve this?
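For the last point, the kind of script I have in mind (a shell sketch; I'm assuming the job exposes a NODE_NAME environment variable, as Jenkins-style agents do):

# Print the agent's name (falling back to hostname) and its IP addresses.
echo "Node: ${NODE_NAME:-$(hostname)}"
hostname -I 2>/dev/null || hostname -i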
I have created an Amazon RDS MariaDB database under the free tier.
I used a randomly generated password first, but it did not connect.
So, I clicked Modify and updated the database with a new password,
but I am still getting the error below.
(conn=36) Access denied for user 'root'@'175.101.107.213' (using password: YES)
Current charset is windows-1252. If password has been set using other charset, consider using option 'passwordCharacterEncoding'
I have updated the password multiple times, but I hit the same issue every time.
Please suggest how to fix this. I checked the passwordCharacterEncoding option in DBeaver; it has an empty value.
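For what it's worth, I could also try forcing the encoding in the JDBC URL itself (a sketch; the endpoint and database are placeholders, and I'm assuming the MariaDB Connector/J passwordCharacterEncoding option applies to my driver version):

jdbc:mariadb://<rds-endpoint>:3306/<database>?passwordCharacterEncoding=UTF-8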
I have created an AKS cluster with the Azure network type selected.
Terraform files for reference: my VNet used with AKS.
But this VNet is not available in the dropdown when creating an Azure Postgres Flexible Server.
I created Postgres with the private VNet option, so there is no direct access to Postgres from the internet.
But I can't access it from Cloud Shell either.
Also, while creating the database, when I want to choose an existing VNet, the AKS cluster's VNet is not shown in the dropdown.
How can I keep the AKS (Azure Kubernetes Service) cluster and the Postgres Flexible Server in the same network?
I even tried VNet linking as a stopgap, but connectivity is not working there either.
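My current understanding (an assumption I haven't verified) is that the Flexible Server's private access option needs a subnet in that VNet delegated to Microsoft.DBforPostgreSQL/flexibleServers. A Terraform sketch, with placeholder resource names and an address range that must not overlap the AKS subnet:

# Hypothetical subnet in the AKS VNet, delegated to Postgres Flexible Server.
resource "azurerm_subnet" "postgres" {
  name                 = "postgres-subnet"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.aks.name
  address_prefixes     = ["10.0.2.0/24"]

  delegation {
    name = "postgres-delegation"
    service_delegation {
      name    = "Microsoft.DBforPostgreSQL/flexibleServers"
      actions = ["Microsoft.Network/virtualNetworks/subnets/join/action"]
    }
  }
}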
I ran the pip freeze command in my Jenkins job and below is the output:
pip freeze
fpdf==1.7.2
textfile==0.1.4
pip install textfile
Requirement already satisfied: textfile in c:\python39\lib\site-packages (0.1.4)
But when I ran the Python script as a job, I got the error below:
$ python C:\Users\ADMINI~1\AppData\Local\Temp\jenkins2938633000292670144.py
Traceback (most recent call last):
File "C:\Users\ADMINI~1\AppData\Local\Temp\jenkins2938633000292670144.py", line 1, in <module>
import textile
ModuleNotFoundError: No module named 'textile'
Build step 'Execute Python script' marked build as failure
No emails were triggered.
Finished: FAILURE
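To narrow this down, a quick check I could add as a build step (a Windows batch sketch, matching the paths in the log; note the installed package is textfile while the script imports textile):

REM Show which interpreter the job uses and what is actually installed.
where python
python -m pip show textfile
python -c "import textfile; print(textfile.__file__)"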
Let's say I have a branch feature1, where there is a pipeline file.
The trigger will be like this:
trigger:
- feature1
For development purposes, I created a new branch from it, say feature1_developer1.
But even though this new branch has the pipeline file, I need to modify the trigger again for it to fire from this branch:
trigger:
- feature1
- feature1_developer1
So, after all my work, when I want to merge to the feature1 branch, I again need to remove this new branch entry before merging.
Is there a better approach for this situation?
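One idea I'm weighing (a sketch; I'm assuming Azure Pipelines wildcard branch filters match this naming convention):

trigger:
  branches:
    include:
    - feature1
    - feature1_*   # any developer branch cut from feature1

That would avoid editing the file per branch, at the cost of agreeing on branch names.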
In GitHub Actions, we can set this using:

- name: Build with Maven
  working-directory: ./VaultService
  run: mvn clean package --file pom.xml
  env:
    CI: false
But there is no working-directory option in Azure DevOps.
I even tried the task below, but it is not building in the VaultService folder:
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'clean package'
    options: '-DbuildDirectory=VaultService'
    publishJUnitResults: false
    javaHomeOption: 'JDKVersion'
    mavenVersionOption: 'Default'
    mavenAuthenticateFeed: false
    effectivePomSkip: false
    sonarQubeRunAnalysis: false
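One variation I have not tried yet (a sketch; my understanding is that mavenPomFile takes a path relative to the repo root, which should scope the build to that folder):

- task: Maven@3
  inputs:
    mavenPomFile: 'VaultService/pom.xml'   # point at the POM inside the subfolder
    goals: 'clean package'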
I set the trigger to the cloud_singlesignon branch.
But my pipeline is saved to the azure-pipelines branch, and the default branch of the repo is master.
The checkout is not happening from cloud_singlesignon.
I observed that it checks out from the branch where the pipeline is saved (azure-pipelines), not the one in the trigger.
Any idea how to troubleshoot this?
My pipeline:
trigger:
- cloud_singlesignon

resources:
- repo: self

pool:
  vmImage: ubuntu-latest

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    wget https://github.com/adoptium/temurin16-binaries/releases/download/jdk-16.0.2%2B7/OpenJDK16U-jdk_x64_linux_hotspot_16.0.2_7.tar.gz
    pwd
    ls -lRt
  displayName: 'Download jdk'

- task: JavaToolInstaller@0
  inputs:
    versionSpec: '16'
    jdkArchitectureOption: 'x64'
    jdkSourceOption: 'LocalDirectory'
    jdkFile: 'OpenJDK16U-jdk_x64_linux_hotspot_16.0.2_7.tar.gz'
    jdkDestinationDirectory: '/opt/jdkcustom'
    cleanDestinationDirectory: true

- script: |
    java -version
    ls -lRt
    pwd
    ls $(Pipeline.Workspace)
    git log --oneline | wc -l
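To troubleshoot, I might append a step that prints the predefined branch variables (a sketch; these are standard Azure DevOps variables, the displayName is mine):

- script: |
    echo "Build.SourceBranch     = $(Build.SourceBranch)"
    echo "Build.SourceBranchName = $(Build.SourceBranchName)"
  displayName: 'Show which branch this run actually used'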
I have installed Adopt OpenJDK 16 using the steps below.
steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    wget https://github.com/adoptium/temurin16-binaries/releases/download/jdk-16.0.2%2B7/OpenJDK16U-jdk_x64_linux_hotspot_16.0.2_7.tar.gz
    pwd
    ls -lRt
  displayName: 'Download jdk'

- task: JavaToolInstaller@0
  inputs:
    versionSpec: '16'
    jdkArchitectureOption: 'x64'
    jdkSourceOption: 'LocalDirectory'
    jdkFile: 'OpenJDK16U-jdk_x64_linux_hotspot_16.0.2_7.tar.gz'
    jdkDestinationDirectory: '/opt/jdkcustom'
    cleanDestinationDirectory: true

- script: |
    java -version
    ls -lRt
    pwd
    ls $(Pipeline.Workspace)
I download the JDK in step 1 and install it in step 2 using JavaToolInstaller@0.
I didn't find a similar installer task for Maven; can anyone please suggest one?
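Meanwhile, the workaround I'm considering is a plain script step (a sketch; the Maven version and mirror URL are my own choices, and I'm assuming the task.prependpath logging command behaves as documented):

- script: |
    wget https://archive.apache.org/dist/maven/maven-3/3.8.6/binaries/apache-maven-3.8.6-bin.tar.gz
    tar -xzf apache-maven-3.8.6-bin.tar.gz -C $(Agent.ToolsDirectory)
    echo "##vso[task.prependpath]$(Agent.ToolsDirectory)/apache-maven-3.8.6/bin"
  displayName: 'Install Maven manually'
- script: mvn -version
  displayName: 'Verify Maven is on PATH'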
I saw a few links where I can tag my Docker image using $(Build.SourceVersion) in an Azure DevOps pipeline.
But that uses the complete commit ID, and I want only the short ID:
I mean this (2cc7968) instead of this (2cc79689fc29ad69698d3062688e2a650da62b8e).
How do I get that?
My pipeline:
# Deploy to Azure Kubernetes Service
# Build and push image to Azure Container Registry; Deploy to Azure Kubernetes Service
# https://docs.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
- master

resources:
- repo: self

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: "685f0716-8b46-436e-8d2a-3d0ff987fce9"
  imageRepository: "azuredevopssampleapp"
  containerRegistry: "aksdevopsacrtesting.azurecr.io"
  dockerfilePath: "**/Dockerfile"
  tag: "$(Build.BuildId)"
  imagePullSecret: "aksdevopsacrtesting458647f2-auth"
  # Agent VM image name
  vmImageName: "ubuntu-latest"

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
    - upload: pipeline_content/manifests
      artifact: manifests
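The approach I'm leaning towards (a sketch; shortSha is a variable name I made up, and I'm assuming the task.setvariable logging command behaves as documented):

steps:
- script: |
    short=$(echo "$(Build.SourceVersion)" | cut -c1-7)
    echo "##vso[task.setvariable variable=shortSha]$short"
  displayName: 'Derive short commit ID'
# ...then tag with $(shortSha) in the Docker@2 task instead of $(tag)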
Is there any way to keep the pipeline file in a different repo than the source code?
That way, we can maintain all the pipeline-related data in a separate repository, while the pipeline still detects changes in the main repo.
Example:
I have a repo my-code-base; instead of creating the pipeline in my-code-base, I will create a separate repo my-infra and save the pipeline files there.
But in the pipeline, the code should be picked up from my-code-base.
Please suggest.
If this works, all the Azure DevOps CI/CD pipelines in my organization can be maintained separately.
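From what I've read so far (an unverified sketch; the repo names are from my example and the branch is a guess), a repository resource with its own trigger might do this:

# azure-pipelines.yml stored in the my-infra repo
resources:
  repositories:
  - repository: code          # alias used by the checkout step below
    type: git                 # Azure Repos Git
    name: my-code-base
    trigger:
    - master                  # run CI when my-code-base changes
steps:
- checkout: code              # fetch the application source instead of self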
I am using Hyper-V as a hypervisor and installed Ubuntu 21 on it.
Whenever I start the VM, eth0 (the only network interface connected, on Hyper-V's Default Switch) does not get an IP and shows as down.
Please bear with my screenshots, as I can't SSH to the machine to copy the output and paste it here as text.
There is no /etc/network/interfaces file on my machine, although many forum answers modify that file.
To bring the network adapter up, I ran the command below:

sudo ip link set eth0 up

The network adapter is up now, but without an IPv4 address.
To get an IP address, I ran the command below:

sudo dhclient eth0

Then I get an IPv4 address.
I need to do this every time I power on the machine.
How do I fix this?
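A possible permanent fix I came across (a sketch; the filename is my own choice, and I'm assuming this Ubuntu release manages networking with netplan):

# /etc/netplan/01-eth0.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
# apply with: sudo netplan apply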
How do I get the projects that have admin users, or the list of those users when a project has several, using the API?
I tried a few links, but they are not working.
I am using SonarQube version 8.9.
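The kind of call I was expecting (a sketch; I'm not certain this endpoint and these parameters exist on 8.9, so it needs verifying against the instance's /web_api documentation; the host, token, and project key are placeholders):

curl -u <admin-token>: "https://sonarqube.example.com/api/permissions/users?projectKey=<project-key>&permission=admin"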
I am running Ansible on a CentOS machine.
[ansadmin@ansible docker]$ ls
Dockerfile hosts simple-devops-image.yml webapp.war
[ansadmin@ansible docker]$ cat hosts
localhost
simple-devops-image.yml:

---
- hosts: all
  become: true
  tasks:
    - name: stop current running container
      command: docker stop simple-devops-container
      ignore_errors: yes

    - name: remove stopped container
      command: docker rm simple-devops-container
      ignore_errors: yes

    - name: remove docker image
      command: docker rmi simple-devops-image
      ignore_errors: yes

    - name: build docker image using war
      command: docker build -t simple-devops-image .
      args:
        chdir: /opt/docker

    - name: create container using simple image
      command: docker run -d --name simple-devops-container -p 8080:8080 simple-devops-image
Even on localhost I am getting permission denied. The user already has sudo rights.
ansible-playbook -i hosts simple-devops-image.yml --check
PLAY [all] *************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ansadmin@localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
PLAY RECAP *************************************************************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
ping is working.
[ansadmin@ansible docker]$ ping localhost
PING localhost(localhost (::1)) 56 data bytes
64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.024 ms
64 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.045 ms
64 bytes from localhost (::1): icmp_seq=3 ttl=64 time=0.045 ms
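One detail I may be missing (a sketch of an inventory tweak I've seen suggested; it assumes Ansible defaults to SSH even for localhost unless told otherwise):

# hosts
localhost ansible_connection=local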
I have set up MDT using the steps provided in this link.
When I try to edit the task sequence and add my own script, the .ps1 file is not downloaded to the target machine and does not run.
Screenshot attached.
I have tried using kustomize to load a properties file as a ConfigMap.
For that, I created a sample set as in the GitHub link.
With the base files:
#kustomize build base
apiVersion: v1
data:
  config: |-
    dbport=1234
    dcname=sfsdf
    dbssl=false
    locktime=300
    domainuser=
kind: ConfigMap
metadata:
  labels:
    owner: sara
  name: database-configmap
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
    owner: sara
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      owner: sara
  template:
    metadata:
      labels:
        app: nginx
        owner: sara
    spec:
      containers:
      - image: nginx
        name: nginx
With the external file:

#kustomize build file
apiVersion: v1
data:
  config: "dbport=156767\r\ndcname=dfsd\r\ndbssl=false\r\nlocktime=300\r\ndomainuser=somedts"
kind: ConfigMap
metadata:
  labels:
    env: dev
    owner: sara
  name: dev-database-configmap
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
    env: dev
    owner: sara
  name: dev-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      env: dev
      owner: sara
  template:
    metadata:
      labels:
        app: nginx
        env: dev
        owner: sara
    spec:
      containers:
      - image: nginx
        name: nginx
If you observe the second ConfigMap, the | block style is removed and the newlines are replaced by \r\n, leaving the value as a single string.
How do I fix this formatting?
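My current suspicion (an assumption: the properties file has Windows CRLF line endings, which would explain the \r\n escapes) is that normalizing the line endings would restore the block style:

dos2unix path/to/database.properties   # the path is a placeholder for my overlay's file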
My original YAML:
base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: database-configmap
data:
  config: |
    dbport=1234
    dcname=sfsdf
    dbssl=false
    locktime=300
    domainuser=
base/Kustomization.yaml
resources:
- deployment.yaml
commonLabels:
  owner: sara
From the parent folder of base:
kustomize build base
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
    owner: sara
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      owner: sara
  template:
    metadata:
      labels:
        app: nginx
        owner: sara
    spec:
      containers:
      - image: nginx
        name: nginx
If you observe the output above, the ConfigMap is being discarded entirely; please suggest how to fix that.
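One variation I plan to test (a sketch; configmap.yaml is a filename I'd introduce), in case the second document inside deployment.yaml is what gets dropped:

# base/kustomization.yaml
resources:
- deployment.yaml
- configmap.yaml   # ConfigMap moved out of the multi-document file
commonLabels:
  owner: sara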
I have a Jenkins production job which involves multiple nodes/slaves; when we run the job, it utilizes them based on the slave mentioned. It also accesses some machines' shared folders, so from the master it reaches shares like \\machine1\c$\sharefolder1 and \\machine2\c$\sharefolder2.
For my test environment, I installed Jenkins and want to keep everything on a single slave, but I don't want to modify the jobs.
So, on the local machine where Jenkins is running, I added entries for machine1 and machine2 to the hosts file. \\localhost\c$ opens, but \\machine1\c$ and \\machine2\c$ do not, even after adding the hosts entries.

# localhost name resolution is handled within DNS itself.
127.0.0.1 localhost machine1 machine2

ping resolves to the 127.0.0.1 loopback address as expected. With localhost or 127.0.0.1 there is no password prompt, but machine1 or machine2 prompt for credentials, and typing the machine's credentials does not work.
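One lead I found (an assumption on my part: Windows' loopback check rejects aliased hostnames for local shares, and the BackConnectionHostNames registry value is the commonly cited workaround):

REM Run on the machine hosting Jenkins, then reboot.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "machine1\0machine2" /f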
When I apply a single label to a node, it works as expected and the job is able to pick this node.
But if I apply multiple labels, it is not working; as far as I can tell, Jenkins takes both of them as one single label.
Example:

label: devbuild

This works with the job.
But:

label: devbuild,installernode

This does not work for any of the jobs with label "devbuild" or "installernode". I also tried ; as a separator, with the same issue.
Please suggest how to apply multiple labels to a single node.
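One thing I haven't tried yet (a guess: Jenkins may treat the label field as whitespace-separated rather than comma-separated):

label: devbuild installernode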