Tech Me More

To quench our thirst for sharing knowledge about our day-to-day experiences and solutions to technical problems we face in our projects.


Thursday, December 17, 2020

PowerCLI scripts for VMware vCenter

For data centers with a large number of virtual machines, changing the configuration of all or most VMs is a tedious task. VMware provides PowerCLI cmdlets that can be used to automate such workflows.

Changing or Switching the VLAN of Virtual Machines. 

Example 1: Change the VLAN for VMs named with the Centos-nginx- prefix (numbered 1500-1999)

for ($i = 1500; $i -lt 2000; $i++)
{
    Get-VM "Centos-nginx-$i" | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName "vxw-dvs-100-virtualwire-8-sid-5005-arc-vm05" -Confirm:$false
}


Wednesday, May 27, 2020

Create a MySQL server in Microsoft Azure using the Java SDK


Microsoft Azure is a leading cloud computing service provider. We can deploy, test, and manage our applications in Azure, which offers a wide range of managed services. MySQL server is one of the services provided by Azure. 

As software developers, we often need to manage Azure resources programmatically, starting from creating a service to deploying new virtual machines, load balancers, etc. 

Azure provides a Java SDK to manage Azure instances programmatically. 

You can find more details about the Azure Java SDK here. 

The Azure Java SDK provides many utilities to manage resources in Azure. However, as of today (27/5/20) there is no client in the Azure SDK for creating a MySQL server. 

So, how do we create a MySQL server instance in Azure? 

Azure provides a separate Maven dependency, still a work in progress, for managing MySQL servers. Please add this dependency to the pom.xml of your project. 

Steps to create a MySQL server using this dependency: 
Step 1: Add the Maven dependency to pom.xml. 
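A sketch of what the dependency entry looks like — the groupId, artifactId, and version shown here are placeholders, so please verify the exact coordinates against the Maven repository before using them:

```xml
<!-- Coordinates are illustrative placeholders; check the Maven repository
     for the exact groupId/artifactId/version of the MySQL management SDK. -->
<dependency>
    <groupId>com.microsoft.azure.mysql</groupId>
    <artifactId>azure-mgmt-mysql</artifactId>
    <version>1.0.0-beta</version>
</dependency>
```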

Step 2: The entry class for managing MySQL servers is MySQLManager. 
Below is a code example that creates a MySQL server using the SDK.

public Server createMySQLServer(AzureTokenCredentials credential,
                                AzureMySqlModel model) {

    if (credential == null || model == null)
        return null;

    Server server = null;
    if (model.validate()) {
        try {
            ServerPropertiesForDefaultCreate defaultProp = new ServerPropertiesForDefaultCreate();

            ServerPropertiesForCreate withVersion = defaultProp
                    .withAdministratorLogin(model.getAdministratorLogin())
                    .withAdministratorLoginPassword(model.getAdministratorPassword())
                    .withVersion(model.getServerVersion());

            server = MySQLManager.configure()
                    .withLogLevel(LogLevel.BODY)
                    .authenticate(credential, credential.defaultSubscriptionId())
                    .servers()
                    .define(model.getServerName())
                    .withRegion(model.getRegion())
                    .withExistingResourceGroup(model.getResourceGroup())
                    .withProperties(withVersion)
                    .create();
        } catch (Exception ex) {
            log.error("Error creating MySQL server {}", ex.getMessage());
        }
    }

    return server;
}


public AzureTokenCredentials getAzureTokenCredentials(String azureClientId,
                                                      String azureTenantId,
                                                      String azureSecret,
                                                      String azureSubscriptionId) {

    AzureTokenCredentials credentials = new ApplicationTokenCredentials(
            azureClientId, azureTenantId, azureSecret, AzureEnvironment.AZURE)
            .withDefaultSubscriptionId(azureSubscriptionId);

    return credentials;
}

AzureMySqlModel is a simple Java POJO, as defined below. 

import com.microsoft.azure.management.mysql.v2017_12_01.ServerVersion;

import lombok.AllArgsConstructor;
import lombok.Data;

@AllArgsConstructor
@Data
public class AzureMySqlModel {

    private String serverName;
    private String administratorLogin;
    private String administratorPassword;
    private String region;
    private String resourceGroup;
    private ServerVersion serverVersion;

    public boolean validate() {
        return serverName != null && administratorLogin != null
                && administratorPassword != null && serverVersion != null
                && region != null && resourceGroup != null;
    }

}

You can get the AzureTokenCredentials by passing the Azure client ID, subscription ID, tenant ID, and client secret to the getAzureTokenCredentials() method. 

As of today, the ServerVersion class only provides constants for MySQL versions 5.6 and 5.7. However, we can pass any Azure-supported MySQL server version by using the fromString method of the ServerVersion class. 
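This works because the SDK models ServerVersion as an expandable string enum rather than a fixed Java enum, so fromString can mint values that have no predefined constant. The class below is a simplified, self-contained stand-in for that pattern (it is not the SDK class itself):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for an "expandable string enum" such as the SDK's
// ServerVersion: a few predefined constants, plus fromString() for any
// other value.
public class MySqlVersion {

    private static final Map<String, MySqlVersion> VALUES = new ConcurrentHashMap<>();

    public static final MySqlVersion FIVE_SIX = fromString("5.6");
    public static final MySqlVersion FIVE_SEVEN = fromString("5.7");

    private final String value;

    private MySqlVersion(String value) {
        this.value = value;
    }

    // Returns the cached instance for a version string, creating it on demand.
    public static MySqlVersion fromString(String value) {
        return VALUES.computeIfAbsent(value, MySqlVersion::new);
    }

    @Override
    public String toString() {
        return value;
    }

    public static void main(String[] args) {
        // A version not covered by the predefined constants still works:
        MySqlVersion v8 = MySqlVersion.fromString("8.0");
        System.out.println(v8);
        System.out.println(v8 == MySqlVersion.fromString("8.0"));
    }
}
```

The same idea lets you hand the real ServerVersion.fromString("8.0") to the model once Azure supports that version.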


Friday, May 15, 2020

OpenFeign : retry request based on custom HTTP response status


In a distributed microservices architecture, we communicate with other microservices using web service clients such as Feign, OpenFeign, RestTemplate, etc.  

Spring Cloud uses OpenFeign under the hood. Graceful error handling is not just a need but a necessity for a robust system. In many instances, a microservice may return an unexpected response with a status code like 40x or 50x, which means either the end user messed something up or there is a system error. In some scenarios, you know the error response may go away if you retry the request. 

E.g., an endpoint /getDetails was throwing a 50x error, but after some time it returns a success response. This can happen due to multiple factors like resource unavailability, client-side errors, networking issues, etc. 


To overcome this problem, the programmer wants to retry the request before throwing an exception. 

feign.codec.ErrorDecoder
ErrorDecoder is the Feign-provided way to decode an error response. It exposes a single method:

public Exception decode(String methodKey, Response response) 

The return type of this method is an Exception. If the returned exception is of type feign.RetryableException, then Feign will retry the request. 

We can implement ErrorDecoder and provide our own implementation of this method. An example is listed below. 

package com.dht;

import java.util.Date;
import org.springframework.http.HttpStatus;
import feign.Response;
import feign.RetryableException;
import feign.codec.ErrorDecoder;
import lombok.extern.slf4j.Slf4j;

/**
 * Custom error decoder for Feign. Retries the request if the
 * response code is 404.
 */

@Slf4j
public class FeignErrorDecoder implements ErrorDecoder {

    private final ErrorDecoder defaultErrorDecoder = new Default();

    @Override
    public Exception decode(String methodKey,
                            Response response) {

        if (response.status() == HttpStatus.NOT_FOUND.value()) {
            log.info("Error while executing " + methodKey + " Error code "
                        + HttpStatus.NOT_FOUND);
            return new RetryableException(response.status(), methodKey, null,
                        new Date(System.currentTimeMillis()),
                        response.request());
        }
        return defaultErrorDecoder.decode(methodKey, response);

    }
}

The above decoder returns a RetryableException if the response code is 404. We can customize it with our own logic. Please remember to pass response.request(), not null, as the last parameter. 

We can also customize how many times we want to retry and the interval before retrying. Feign comes with the feign.Retryer interface, which we can implement to customize the retry behavior. See the below example.


import java.util.concurrent.TimeUnit;

import feign.RetryableException;
import feign.Retryer;
import lombok.extern.slf4j.Slf4j;

/**
 * Custom feign retryer.
 */
@Slf4j
public class FeignRetyer implements Retryer {

    private final int maxAttempts;
    private final long backoff;
    int attempt;

    /**
     * Waits 10 seconds before retrying, up to 5 attempts.
     */
    public FeignRetyer() {
        this(10000, 5);
    }

    public FeignRetyer(long backoff,
                       int maxAttempts) {
        this.backoff = backoff;
        this.maxAttempts = maxAttempts;
        this.attempt = 1;
    }

    public void continueOrPropagate(RetryableException e) {

        if (attempt++ >= maxAttempts) {
            throw e;
        }

        try {
            TimeUnit.MILLISECONDS.sleep(backoff);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }

        log.info("Retrying: " + e.request().url() + " attempt " + attempt);
    }

    @Override
    public Retryer clone() {
        return new FeignRetyer(backoff, maxAttempts);
    }
}

In the above example, we retry 5 times before throwing the exception, and the interval between requests is 10 seconds.
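To see how the two pieces interact, here is a simplified, framework-free stand-in for Feign's invocation loop (the real loop lives inside Feign itself; the names below are illustrative): each failed attempt is handed to the retryer, which either sleeps and lets the loop retry, or rethrows once the attempts are exhausted.

```java
import java.util.concurrent.Callable;

// Simplified stand-in for Feign's retry loop, to illustrate the
// continueOrPropagate() contract. Not Feign's actual implementation.
public class RetryLoopDemo {

    // Minimal retryer: retry up to maxAttempts with a fixed backoff,
    // then rethrow the last exception.
    static class SimpleRetryer {
        private final int maxAttempts;
        private final long backoffMillis;
        private int attempt = 1;

        SimpleRetryer(long backoffMillis, int maxAttempts) {
            this.backoffMillis = backoffMillis;
            this.maxAttempts = maxAttempts;
        }

        void continueOrPropagate(RuntimeException e) {
            if (attempt++ >= maxAttempts) {
                throw e; // give up: propagate to the caller
            }
            try {
                Thread.sleep(backoffMillis);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                throw e;
            }
        }
    }

    // Drives the call the way Feign drives a client method invocation.
    static <T> T invokeWithRetry(Callable<T> call, SimpleRetryer retryer) throws Exception {
        while (true) {
            try {
                return call.call();
            } catch (RuntimeException e) { // stands in for RetryableException
                retryer.continueOrPropagate(e);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice, then succeeds on the third attempt.
        String result = invokeWithRetry(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("HTTP 404");
            }
            return "OK";
        }, new SimpleRetryer(10, 5));
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```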

Finally, we need to register both of the above classes as beans in the Feign configuration. We can do this as follows.

    @Bean
    public FeignErrorDecoder feignErrorDecoder() {
        return new FeignErrorDecoder();
    }

    @Bean
    public Retryer retryer() {
        return new FeignRetyer();
    }

As per the above example, we will see requests being retried 5 times before failing. 

Happy retrying & coding. 

Wednesday, April 22, 2020

How to run Netflix Eureka with a microservice app in Docker containers using docker-compose

Distributed systems are the need of the current time. For faster, independent, resilient, and highly available applications, it is necessary not only to adopt newer technologies for building application logic but also to deploy them quickly.

Spring Boot is a widely used framework for developing microservices in the Java ecosystem. With Docker and Kubernetes, it is also easy to deploy them anywhere. 

In this post, I will explain how Netflix Eureka and a microservice app can communicate with each other in a container environment. 

For this post, I am using two images. 
  • Netflix-eureka: image for the Netflix Eureka server, listening on port 8762
  • Support-System: a simple microservice application running on the default port 8080

For communication with Netflix Eureka, we need to provide the eureka.client.serviceUrl.defaultZone=http://eureka-server:8762/eureka property in our application, so that it knows where the Netflix Eureka service is running. 

Hard-coding this value is a problem and is discouraged for stateless deployments. How do we solve this when the two applications are running in different containers? 

To solve this problem, we use docker-compose and link the two services together. 

docker-compose.yml


eureka-server:
    image: devakash/eureka:latest
    ports:
      - '80:8762'
    restart: always
        
supportsystem:
    image: devakash/supportsystem:latest
    ports:
      - '8080:8080'
    environment:
        - eureka.client.serviceUrl.defaultZone=http://eureka-server:8762/eureka
        - email.fromemail=username@gmail.com
        - email.password=Password
    links:
        - eureka-server
    restart: always

In the above file, please notice the links key. Here we are linking our microservice to eureka-server. Also notice the defaultZone environment variable, where in place of an IP address we use the container name eureka-server, which acts as an identifier (hostname) for the Netflix Eureka server. 

Netflix Eureka's default port is 8761, but we can change it using the below settings in the application.properties file of the Netflix Eureka microservice application. 

server.port=8762

eureka.client.serviceUrl.defaultZone: http://localhost:${server.port}/eureka/

For more details, please refer to: https://hub.docker.com/r/devakash/supportsystem


Monday, April 20, 2020

How to set up a Kubernetes cluster in a freshly deployed Ubuntu virtual machine


Kubernetes is the most widely used orchestration tool for Docker containers. To set up Kubernetes on an Ubuntu box, please follow the steps below. Docker is a prerequisite for installing Kubernetes; the script provided below takes care of the Docker installation before proceeding with the Kubernetes installation.

Procedure
Prerequisite: Have root access to the ubuntu machine.
  1. SSH to the Ubuntu virtual machine. 
  2. Run the command apt-get update.
  3. Copy the script from the Script Content section below and save it as install_kubernetes.sh.
  4. Edit the script and change the apiserver-advertise-address to the machine's IP address.
  5. To get the IP address, run ip addr.
  6. Run the script on the Ubuntu machine as sh install_kubernetes.sh.
Script Content:

#!/bin/bash

echo "installing docker"
apt-get update
apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
   "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
   $(lsb_release -cs) \
   stable"
apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')

echo "installing kubernetes"
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet=1.13.5-00 kubeadm=1.13.5-00 kubectl=1.13.5-00

echo "deploying kubernetes (with calico)..."
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address="172.31.X.X" 
export KUBECONFIG=/etc/kubernetes/admin.conf



kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml


The output of the above command will look like the following. Please save this output for future use. 

master_install_output 

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.149.X.X:6443 --token 6zqewj.0aycerc78v3es6gk \
    --discovery-token-ca-cert-hash sha256:4aa355c1340feccabaceda0ebaeab0996e040c998ed6255d8ec2357cf66e

Please run the above 3 commands on the master node.
Once done, you can verify the installation using the below command.

kubectl cluster-info 

Now the cluster is set up with one node, i.e. the master. Verify this by running the following command.

kubectl get nodes  


Add Nodes to Kubernetes cluster 


Now, to add other nodes to this cluster, set up another Ubuntu machine. 

Run the below script on that Ubuntu machine. 

#!/bin/bash
echo "installing docker"
apt-get update
apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
   "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
   $(lsb_release -cs) \
   stable"
apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')

echo "installing kubeadm and kubectl"
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet=1.13.5-00 kubeadm=1.13.5-00 kubectl=1.13.5-00


After running the above script, copy the kubeadm join command that we saved as part of the master node installation (as depicted under the master_install_output heading) and run it on the second Ubuntu machine.


kubeadm join 10.149.X.X:6443 --token 6zqewj.0aycerc78v3es6gk \
    --discovery-token-ca-cert-hash sha256:4aa355c1340feccabaceda0ebaeab0996e040c998ed6255d8ec2357cf66e

This will join the machine to our already set-up cluster. 

Wednesday, July 17, 2019

How to send attachment in Slack channel using Restful Webservice



Slack: What comes to your mind when you hear the word Slack? It is probably one of the best collaboration tools available in the market. Nowadays Slack is widely used across organizations of all sizes, big, mid, or small. 


Now, since everyone is active on Slack, it is extremely useful to be able to send messages to Slack channels programmatically. This is where the need for automated notifications arises. Slack provides a number of REST APIs to accomplish this task. 

In this post, we will cover how to send attachments to Slack channels using the Slack files.upload endpoint with Java. 

Prerequisites: 
  • You should have a Slack authentication token generated. 
  • The name of the Slack channel where the message will be delivered. 



package com.dht.test;

import java.io.File;

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.mime.MultipartEntityBuilder;
import org.apache.http.impl.client.HttpClientBuilder;

public class SlackChannelAttachmentUtil {

    public static void main(String[] args) {
        try {
            String url = "https://slack.com/api/files.upload";
            HttpClient httpclient = HttpClientBuilder.create().disableContentCompression().build();
            HttpPost httppost = new HttpPost(url);

            MultipartEntityBuilder reqEntity = MultipartEntityBuilder.create();
            reqEntity.addBinaryBody("file", new File("C:\\dht.properties"));
            reqEntity.addTextBody("channels", "attachment-testing");
            reqEntity.addTextBody("token", "xoxp-4735837518-xxxxxx");
            reqEntity.addTextBody("media", "file");
            reqEntity.addTextBody("initial_comment", "Hello This is from code");

            httppost.setEntity(reqEntity.build());
            HttpResponse response = httpclient.execute(httppost);
            System.out.println(response.getStatusLine().getReasonPhrase());
            System.out.println(response.getStatusLine().getStatusCode());
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}



The above code will send the dht.properties file to the attachment-testing Slack channel. You will also see the message "Hello This is from code" along with the attachment.

Tuesday, May 21, 2019

How to fix VersionOne.Parsing.MismatchedCharException!

Are you irritated by the VersionOne.Parsing.MismatchedCharException that occurs while accessing the VersionOne REST API with a where clause? 



CAUSES

This error is commonly caused by the use of special characters in a VersionOne REST endpoint. 
The 'where' clause in a rest-1.v1 or query.v1 query only supports ASCII, so any non-ASCII or Unicode character will result in this error. 

SOLUTION


The 'with' clause should be used by default to avoid unsupported-character issues.

It should look something like this -

Scopes.Name=$name&with=$name= XXXXX: XXXXX
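Building such a query in Java looks roughly like the following; the host, endpoint path, and attribute name are illustrative, but the key point is that the value is URL-encoded and substituted through the 'with' parameter instead of being embedded directly in the 'where' clause:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class VersionOneQueryBuilder {

    // Builds a rest-1.v1 query that filters by name via the 'with' clause,
    // so a (possibly non-ASCII) value never appears inside the 'where'
    // clause itself. Base URL and endpoint path are illustrative.
    public static String buildScopeQuery(String baseUrl, String scopeName)
            throws UnsupportedEncodingException {
        String encoded = URLEncoder.encode(scopeName, "UTF-8");
        return baseUrl + "/rest-1.v1/Data/Scope"
                + "?where=Scopes.Name=$name"
                + "&with=$name=" + encoded;
    }

    public static void main(String[] args) throws Exception {
        // A name with a non-ASCII character, which would break a plain where clause:
        String url = buildScopeQuery("https://host/instance", "Projekt Müller");
        System.out.println(url);
    }
}
```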



Friday, May 17, 2019

Install Jenkins with a Dockerfile & preinstalled plugins [copy existing .jpi/.hpi files to the Docker image]


Create a new file named Dockerfile and add the below content.

FROM jenkins/jenkins:lts
USER root
MAINTAINER diehardtechy@gmail.com


If you want to copy any existing .hpi/.jpi file into your Docker image, please add the below command to the Dockerfile (e.g., to copy the perforce.jpi plugin from the local machine to the Docker image):

COPY perforce.jpi /var/jenkins_home/plugins/



If you want Jenkins to preinstall plugins, create a plugins.txt file with the below content.

The format is pluginid:version_info. For example, to install the notification plugin version 1.13, specify it as below; if we don't pass the version info, the latest version will be installed.

notification:1.13 
ws-cleanup:0.34 
mask-passwords:2.12.0


To copy the plugins.txt file into the Docker image and install the listed plugins, add the below commands to the Dockerfile.
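A typical way to finish the Dockerfile is with the install-plugins.sh helper that ships with the jenkins/jenkins image (the paths below are that image's defaults):

```dockerfile
# Copy the plugin list into the image and preinstall everything it names.
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
```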