Pakistan's First Oracle Blog

Blog By Fahd Mirza Chughtai

How to Enable SSH Equivalency Between EC2 Instances

Tue, 2021-04-20 01:55

If you want to log in to one Linux instance from another Linux instance without a password and without specifying the key on the command line, then SSH equivalency is the solution.

Normally, to set up SSH equivalency between two Linux instances, you generate a public/private key pair, copy the public key over to the other instance, append it to the authorized_keys file, and so on.

But for an EC2 instance in AWS, you create or specify the key pair at instance launch time. When you launch an EC2 instance, the public key is already present in the home directory of the default user. For example, on Amazon Linux the public key is already in the /home/ec2-user/.ssh/authorized_keys file. That is why you only need the private key to SSH into that server.

Let's say you have another Linux-based EC2 instance and you want to establish SSH equivalency between these two instances. Let's suppose both are using the same key pair, which means both already have the public key present in their /home/ec2-user/.ssh/authorized_keys file. In that case, all you need to do on each server to establish SSH equivalency is the following:


1- Login to Instance 1

2- Go to /home/ec2-user/.ssh/ directory

3- Create a new file and restrict its permissions

touch id_rsa

chmod 700 id_rsa

4- Copy the contents of your .pem key and paste them into this id_rsa file

Now you should be able to SSH to the other server, which uses the same key pair.

Repeat the above steps on the other server if you want to enable SSH equivalency in the reverse direction.
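Putting it together, here is a minimal sketch of the whole setup, assuming the default ec2-user and a key file called my-key.pem (the key name and IP below are just placeholders):

# on Instance 1, as ec2-user
cd /home/ec2-user/.ssh
touch id_rsa
chmod 700 id_rsa
cat /path/to/my-key.pem > id_rsa        # paste the contents of your .pem private key
ssh ec2-user@10.0.1.25                  # the other instance's private IP; no -i needed now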

Categories: DBA Blogs

Where to Put PostgreSQL in AWS

Thu, 2021-04-15 22:44

When it comes to running a PostgreSQL database in AWS, you are spoiled for choice. There are three ways to do it (a quick CLI sketch of the two managed options follows the list):



1) Install and configure PostgreSQL yourself on an EC2 instance.

2) Amazon RDS for PostgreSQL

3) Amazon Aurora for PostgreSQL
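For a feel of the two managed options, here is a hedged AWS CLI sketch; every identifier, instance class, and password below is a placeholder:

# option 2: a managed RDS PostgreSQL instance
aws rds create-db-instance \
    --db-instance-identifier my-postgres \
    --engine postgres \
    --db-instance-class db.t3.medium \
    --allocated-storage 20 \
    --master-username postgres \
    --master-user-password 'ChangeMe123!'

# option 3: an Aurora PostgreSQL cluster
aws rds create-db-cluster \
    --db-cluster-identifier my-aurora-postgres \
    --engine aurora-postgresql \
    --master-username postgres \
    --master-user-password 'ChangeMe123!'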

You can watch the whole video here.

Categories: DBA Blogs

One Reason to Run Oracle on Google Cloud Platform

Wed, 2021-03-17 02:29

There is one reason to run Oracle on Google Cloud Platform, one solid and compelling reason. It has nothing to do with cost, and it has nothing to do with performance.

In all fairness, you can get cost savings (or not) with any cloud provider, in terms of both software and hardware. But if you are running, or have to run, Oracle, then cost is probably not your issue. For me, the one differentiating reason is the presence of Google BigQuery in GCP.

A serverless, fast, easy-to-use, and very powerful data warehouse, BigQuery is an attraction in its own right when you compare it to competing cloud offerings. I am seeing more and more companies drawn to GCP just to use BigQuery as the unified warehouse for their data. Companies are using ETL and ELT tools and pipelines to push data into BigQuery from all sorts of databases and data stores on AWS, OCI, and Azure.
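As a small hedged illustration of how simple that landing zone can be, here is what loading and querying an extract looks like with the bq CLI (the project, dataset, table, and bucket names are placeholders):

bq mk --dataset myproject:oracle_extracts
bq load --source_format=CSV --autodetect \
    oracle_extracts.orders gs://my-example-bucket/oracle/orders.csv
bq query --use_legacy_sql=false \
    'SELECT customer_id, SUM(amount) AS total FROM oracle_extracts.orders GROUP BY customer_id'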

So if you have a choice, why not put your Oracle database on GCP, on a VM or on their bare metal offering? If you even mention that to your GCP sales rep, there is a very strong chance they will get you a good discount. Be sure to mention that you intend to integrate other GCP services with that Oracle database in the future, and you might even get the bare metal for free. That's my guess, but there is no harm in trying.


Categories: DBA Blogs

Compartments in OCI

Sat, 2021-03-13 21:23

One of my favorite concepts in Oracle Cloud Infrastructure (OCI) is compartments. If you have worked in AWS, they may at first seem redundant and cumbersome, but on the contrary, they are quite useful and make things less cluttered.

I think if AWS got a chance to reorganize its cloud governance model, it might also introduce something like this, but then they don't like to copy things.

A compartment is used to organize your cloud resources such as compute instances, buckets, and so on. Compartments are a global concept and span regions, so you can place resources from multiple regions within the same compartment.

An OCI account is called a tenancy. When you create a tenancy, you also get a default compartment, called the root compartment. Of course, you can create many other compartments under it too.

One of the biggest advantages of OCI compartments is that they enable cost control of your cloud resources. You can assign budgets, quotas, and cost-tracking tags to a compartment and its resources. You can also attach policies to compartments, which lets you control access in a unified and centralized way. All you have to do is design the layout of your resources.
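A small hedged sketch with the OCI CLI, just to make it concrete (the OCIDs and names are placeholders):

# create a compartment under the tenancy (root) compartment
oci iam compartment create \
    --compartment-id ocid1.tenancy.oc1..exampletenancy \
    --name dev-compartment \
    --description "Compartment for dev resources"

# list the compute instances living in a compartment
oci compute instance list \
    --compartment-id ocid1.compartment.oc1..examplecompartment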

Categories: DBA Blogs

Solution to the NuGet Provider Issue with PowerShell and AWS Tools

Wed, 2021-02-24 20:08

On an AWS EC2 Windows 2012 server, my goal was to write some data to an S3 bucket. I was using a small PowerShell script to copy the file to the S3 bucket. For that I needed to install AWS Tools for PowerShell, and I used the following command at a PowerShell prompt running as administrator:

Windows PowerShell

Copyright (C) 2016 Microsoft Corporation. All rights reserved.


PS C:\Users\SRV> Install-Module -Scope CurrentUser -Name AWSPowerShell.NetCore -Force

and it failed with the following error:

NuGet provider is required to continue
PowerShellGet requires NuGet provider version '2.8.5.201' or newer to interact with NuGet-based repositories. The NuGet provider must be available in 'C:\Program Files\PackageManagement\ProviderAssemblies' or 'C:\Users\SRV\AppData\Local\PackageManagement\ProviderAssemblies'. You can also install the NuGet provider by running 'Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force'. Do you want PowerShellGet to install and import the NuGet provider now?
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): y
WARNING: Unable to download from URI 'https://go.microsoft.com/fwlink/?LinkID=627338&clcid=0x409' to ''.
WARNING: Unable to download the list of available providers. Check your internet connection.
PackageManagement\Install-PackageProvider : No match was found for the specified search criteria for the provider 'NuGet'. The package provider requires 'PackageManagement' and 'Provider' tags. Please check if the specified package has the tags.

Solution:

The solution is to enable TLS 1.2 on this Windows host, which you can do by running the following in PowerShell in administrator mode:


Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord


Close your PowerShell window, reopen it as administrator, and check whether the TLS 1.2 protocol is now enabled by typing the following command at the PS prompt:

[Net.ServicePointManager]::SecurityProtocol

If the above shows Tls12 in the output, then we are all good and you should now be able to install AWS Tools.

I hope that helps.




Categories: DBA Blogs

Boto3 Dynamodb TypeError: Float types are not supported. Use Decimal types instead

Mon, 2021-02-22 01:26

I was trying to ram data into AWS DynamoDB via Boto3 and the write failed with the following error:


  File "C:\Program Files\Python37\lib\site-packages\boto3\dynamodb\types.py", line 102, in serialize

    dynamodb_type = self._get_dynamodb_type(value)

  File "C:\Program Files\Python37\lib\site-packages\boto3\dynamodb\types.py", line 115, in _get_dynamodb_type

    elif self._is_number(value):

  File "C:\Program Files\Python37\lib\site-packages\boto3\dynamodb\types.py", line 160, in _is_number

    'Float types are not supported. Use Decimal types instead.')

TypeError: Float types are not supported. Use Decimal types instead.



I was actually getting some raw data points from CloudWatch for later analytics. These data points were floats, which are not supported by the Boto3 DynamoDB serializer. Instead of importing the decimal library or doing JSON manipulation, you can solve the above with a simple Python format expression like this:

"{0:.2f}".format(datapoint['Average'])

It worked like a charm afterwards. Keep in mind that the formatted value is stored as a string attribute in DynamoDB; if you need a numeric attribute, convert the float to a Decimal instead. I hope that helps.
Categories: DBA Blogs

Main SQL Window Functions for Data Engineers in Cloud

Fri, 2021-02-19 22:36

Becoming a data engineer in the cloud requires a good grasp of SQL, among various other things. SQL is the premier tool for interacting with data sets. At first it seems daunting to see all those SQL analytic functions, but if you start with a tiny dataset like the one in the examples below and understand how these functions work, then it all becomes very easy on datasets of any volume.

Once you know the basic structure of SQL and understand the basic clauses, it's time to jump into the main analytic functions. Below I have used SQL's WITH clause to generate a tiny dataset in Oracle, so you don't have to create a table and load it with sample data to play along. Just run the WITH clause with each accompanying SELECT statement; together they demonstrate the common SQL window functions.


1- In this example, the sum and row_number functions work over the whole window for each row.

   

With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,sum(t) over () as SumEachRow, row_number() over (order by t) as RN from x;


2- In this example, the sum and row_number functions work over each partition of the window for each row. The window is partitioned on column t.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,sum(t) over (partition by t) as SumEachRow, row_number() over (partition by t order by t) as RN from x;


3- In the following example, we have divided the window into two partitions by using a CASE expression within the PARTITION BY clause. One partition is where t = 1, and the other partition is composed of the rest of the rows.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,sum(t) over (partition by CASE WHEN t = 1 THEN t ELSE NULL END) as SumEachRow, row_number() over (partition by CASE WHEN t = 1 THEN t ELSE NULL END order by t) as RN from x;


4- The example below is a variant of example 3. In it, the row_number window function works over the whole window, whereas the sum window function works over the partitions.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,sum(t) over (partition by CASE WHEN t = 1 THEN t ELSE NULL END) as SumEachRow, row_number() over (order by t) as RN from x;


5- This example uses the lag function to return the previous value within the window. With lag, the value for the first row is always null, as there is no previous row.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,lag(t) over (order by t) as Previous_t from x;


6- This example uses the lead function to return the next value within the window. With lead, the value for the last row is always null, as there is no next row.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,lead(t) over (order by t) as Next_t from x;


7- This example shows that the first_value function returns the first value in the window for each row.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,first_value(t) over (order by t) as First_t from x;


8- This example shows that the first_value function returns the first value in each partition of the window for each row.

With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,first_value(t) over (partition by t order by t) as First_t from x;


9- This example shows that the last_value function returns the last value in the window for each row.


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,last_value(t) over (order by t ROWS BETWEEN

           UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as Last_t from x;


10- This example shows that the last_value function returns the last value in each partition of the window for each row.

With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,last_value(t) over (partition by t order by t ROWS BETWEEN

           UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as Last_t from x;


For an explanation of the ROWS BETWEEN UNBOUNDED clause, see this. In short, the default window frame only extends up to the current row, so without ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING, last_value would simply return the current row's value; extending the frame to the whole window (or partition) makes it return the true last value.

11- This example shows the rank function, which is useful for Top-N or Bottom-N style queries. The following is over the whole window. The main idea is that the rank starts at 1 for the first row and stays the same for rows that have the same value within the window. When the value changes, the rank jumps to that row's position from the top, so ranks can have gaps.

With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,rank() over (order by t) as Rank from x;


12- This example shows the rank function again, this time over each partition of the window. (Note that because we partition by t and also order by t, every row is ranked 1 within its own partition here.)


With x as ( 

   SELECT 'tom' as name, 1 AS t from dual

   UNION ALL

   SELECT 'harry' as name,2 AS t  from dual

   UNION ALL

   SELECT 'jade' as name,2 AS t  from dual

   UNION ALL

   SELECT 'ponzi' as name,3 AS t  from dual

)

select name,t,rank() over (partition by t order by t) as Rank from x;


PS: Yes, I know the formatting of the code chunks is not great, but that seems to be a limitation of the Blogger platform, and it's another note to self that I need to move to a better one.

Categories: DBA Blogs

Docker Behind Proxy on CentOS - Solution to Many Issues

Thu, 2021-01-28 22:50

If you are running Docker behind a proxy on CentOS and receiving timeout or network errors, then use the steps below to configure proxy settings on the CentOS box where Docker is installed and where you are trying to build the Docker image:

Log in as the user who is going to build the image.


Create the drop-in directory with sudo

    sudo mkdir -p /etc/systemd/system/docker.service.d


Create a file for the HTTP proxy setting

    /etc/systemd/system/docker.service.d/http-proxy.conf

    and insert the following content into it:

    [Service]

    Environment="HTTP_PROXY=http://yourproxy.com:80/"


Create a file for the HTTPS proxy setting

    /etc/systemd/system/docker.service.d/https-proxy.conf

    and insert the following content into it:

    [Service]

    Environment="HTTPS_PROXY=https://yourproxy.com:80/"


Reload the systemd daemon

systemctl daemon-reload


Restart Docker:

service docker restart
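As a quick sanity check (not part of the original steps), you can confirm the daemon actually picked up the proxy environment:

sudo systemctl show --property=Environment docker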


Also, if you are trying to install Yarn or npm packages within your Dockerfile, then define the following environment variables in the Dockerfile:

ENV http_proxy=http://yourproxy.com

ENV https_proxy=http://yourproxy.com

ENV HTTP_PROXY=http://yourproxy.com

ENV HTTPS_PROXY=http://yourproxy.com


Notice that only the http protocol is specified, for both the HTTPS and the HTTP proxy variables.

Restart Docker again after making these changes.

I hope that helps.


Categories: DBA Blogs

Most Underappreciated AWS Service and Why

Tue, 2021-01-05 17:11

Who wants to mention in their resume that one of their operational tasks is tagging cloud resources? Well, I did, and I mentioned that one of the tools I used for that purpose was Tag Editor. The interviewer was surprised to learn that there was such a thing in AWS that allowed tagging multiple resources at once. I got the job thanks to this most under-appreciated and largely unknown service.

Tagging is boring but essential. As the cloud matures, tagging is fast becoming an integral part of it. In the environments I manage, most tagging is automated, but there is still a requirement at times for manual bulk tagging, and that's where Tag Editor comes in very handy. Besides bulk tagging, Tag Editor enables you to search for the resources that you want to tag and then manage tags for the resources in your search results.

There are various other tools available from AWS for tag compliance and management, but the reason I like Tag Editor most is its ease of use and its single pane of glass for searching resources by tag key, tag value, region, or resource type. It's not as glamorous as Amazon Monitron, AWS Proton, or AWS Fargate, but it is every bit as useful.
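The same bulk-tagging capability that Tag Editor exposes in the console is available through the Resource Groups Tagging API; here is a hedged AWS CLI sketch (the ARNs and tag values are placeholders):

# find resources by tag key/value, optionally narrowed to a resource type
aws resourcegroupstaggingapi get-resources \
    --tag-filters Key=Environment,Values=prod \
    --resource-type-filters ec2:instance

# apply tags to several resources at once
aws resourcegroupstaggingapi tag-resources \
    --resource-arn-list arn:aws:ec2:ap-southeast-2:123456789012:instance/i-0abc123 arn:aws:s3:::my-example-bucket \
    --tags Owner=dba-team,CostCentre=1234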

In our environment, if it's not tagged then it's not allowed in the cloud. Tag Editor addresses the basics of being in the cloud. Get it right, and you are well on your way to a well-architected cloud infrastructure.

Categories: DBA Blogs

From DBA to DBI

Mon, 2020-10-19 18:48

Recently Pradeep Parmer at AWS had a blog post about transitioning from DBA to DBI, or in other words from database administrator to database innovator. I wonder what exactly the difference is here, as any DBA worth his or her salt is already an innovator.

Administering a database is not about sleepily issuing backup commands or, in the case of cloud-managed databases, clicking here and there. Database administration has evolved over time just like other IT roles and is totally different from what it was a few years back.

Regardless of the database engine you use, you have to have a breadth of knowledge about operating systems, networking, automation, and scripting, on top of database concepts. With managed database services in the cloud like AWS RDS, GCP Cloud SQL, or BigQuery, many of the old skills have become outdated, but new ones have sprung up. That has always been the case in the DBA field.

Take the example of Oracle: what we were doing in Oracle 8i became obsolete by Oracle 11g, and Oracle 19c is a totally different beast. Oracle Exadata, RAC, the various types of DR services, and Fusion Middleware are each a new ballgame with every version.

Even with managed database services, the role of the DBA has become more involved, in terms of migrations and then of optimizing what's running within the databases to stop database costs from going through the roof.

So the point here is that DBAs have always been innovators. They have always been trying to find new ways to automate the management and healing of their databases. They are always under pressure to eke out every last possible optimization from their systems, and that's still the case even if those databases are supposedly managed by cloud providers.

With purpose-built databases, where a different database technology addresses each different use case, the role of the DBA has only become more relevant, as DBAs have to evolve to cover graph, in-memory, and the other nifty new types of databases.

We have always been innovators, my friend.

Categories: DBA Blogs

What is Purpose Built Database

Mon, 2020-10-05 17:30

In simple words, a general-purpose database engine is a big, clunky piece of software with features for all the use cases, and it's up to you to choose which features to use. In a purpose-built database, on the other hand, you get a lean, specific database that is only suitable for the feature set you want.

For instance, AWS offers 15 purpose-built database engines, including relational, key-value, document, in-memory, graph, time series, and ledger databases. GCP also provides multiple database types, like Spanner, BigQuery, etc.

But the thing is that the one-size-fits-all monolithic databases aren't going anywhere. They are here to stay. A medium-to-large organization has way too many requirements and features in use, and having one database per use case increases the footprint and the cost. For every production database there is a dev, test, and QA database, so the footprint keeps multiplying.

So although the notion of a purpose-built database is great, it's not going to throw the monolithic database out of the window. It just provides another option: an organization can use a managed purpose-built database for a specialized use case, but for general OLTP and data warehouse requirements, monolithic is still the way.

Categories: DBA Blogs

5 Important Steps Before Upgrading Oracle on AWS RDS

Sat, 2020-09-26 23:03

Even though AWS RDS (Relational Database Service) is a managed service, which means you won't have to worry about upgrades, patches, and other tidbits, you still have the option of manually triggering an upgrade at a time of your choice.

Upgrading an Oracle database is quite critical, not only for the database itself but, more importantly, for the dependent applications. It's very important to try any upgrade out on a representative test system beforehand to iron out any wrinkles and to check timings and other potential issues.

Here are 5 important steps you can take before upgrading Oracle on AWS RDS to make the process more risk-free, speedy, and reliable:

  1. Check for invalid objects such as procedures, functions, and packages in your database, and recompile them where possible.
  2. Make a list of the objects that are still invalid and, if possible, delete them to remove clutter.
  3. Disable and remove audit logs if they are stored in the database.
  4. Convert DBMS_JOB jobs and other legacy scheduling to DBMS_SCHEDULER.
  5. Take a snapshot of your production database right before you upgrade; this speeds up the upgrade because only a delta snapshot then needs to be taken during the upgrade (see the sketch below).
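A rough AWS CLI sketch of step 5 and of triggering the upgrade itself; the instance identifier, snapshot name, and target engine version are placeholders:

# take a manual snapshot right before the upgrade window
aws rds create-db-snapshot \
    --db-instance-identifier prod-oracle \
    --db-snapshot-identifier prod-oracle-pre-19c

# then trigger the engine version upgrade at a time of your choosing
aws rds modify-db-instance \
    --db-instance-identifier prod-oracle \
    --engine-version 19.0.0.0.ru-2020-10.rur-2020-10.r1 \
    --allow-major-version-upgrade \
    --apply-immediately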
I hope that helps.

Categories: DBA Blogs

Choice State in AWS Step Functions

Thu, 2020-09-17 02:47

Richly asynchronous, serverless applications can be built using AWS Step Functions. The expanded Choice state support in AWS Step Functions is one of its newest features and was long awaited.

In simple words, we define steps and their transitions and call the whole thing a state machine. To define this state machine, we use the Amazon States Language (ASL). ASL is a JSON-based structured language that defines state machines and collections of states that can perform work (Task states), determine which state to transition to next (Choice state), and stop execution on error (Fail state).

So if the requirement is to add branching logic like an if-then-else or case statement to our state transitions, then the Choice state comes in handy. The Choice state update introduces various new operators into ASL, and the sky is now the limit on the possibilities. Operators for the Choice state include type and null checks such as IsNull and IsString, existence checks such as IsPresent, glob-style wildcards for matching strings, and variable-to-variable string comparison.
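For a concrete feel, here is a hedged sketch of a tiny state machine with a Choice state, created via the CLI; the names, ARNs, and role below are placeholders:

aws stepfunctions create-state-machine \
    --name order-flow \
    --role-arn arn:aws:iam::123456789012:role/StepFunctionsRole \
    --definition '{
      "StartAt": "CheckOrder",
      "States": {
        "CheckOrder": {
          "Type": "Choice",
          "Choices": [
            { "Variable": "$.amount", "IsPresent": true, "Next": "ProcessOrder" }
          ],
          "Default": "RejectOrder"
        },
        "ProcessOrder": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:ap-southeast-2:123456789012:function:process-order",
          "End": true
        },
        "RejectOrder": { "Type": "Fail" }
      }
    }'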

The Choice state enables developers to simplify existing definitions or add dynamic behavior within state machine definitions. This makes it easier to orchestrate multiple AWS services to accomplish tasks. Modelling complex workflows with extended logic is now possible with this feature.

Now one hopes that AWS introduces a way to do all of this graphically instead of dabbling in ASL.

Categories: DBA Blogs

CloudFormation Template for IAM Role with Inline Policy

Tue, 2020-08-18 21:10
I struggled a bit to create a CloudFormation template for an IAM role with an inline policy and an IAM user as the principal. So here it is as a quick reference:


AWSTemplateFormatVersion: 2010-09-09
Parameters:
  vTableName:
    Type: String
    Description: the tablename
    Default: arn:aws:dynamodb:ap-southeast-2:1234567:table/test-table
  vUserName:
    Type: String
    Description: New account username
    Default: mytestuser
Resources:
  DynamoRoleForTest:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${AWS::AccountId}:user/${vUserName}'
            Action:
              - sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: DynamoPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - dynamodb:BatchGet*
                  - dynamodb:DescribeStream
                  - dynamodb:DescribeTable
                  - dynamodb:Get*
                  - dynamodb:Query
                  - dynamodb:Scan
                Resource: !Ref vTableName
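To try it out, you could deploy it with something like this (the file and stack names are just examples):

aws cloudformation deploy \
    --template-file iam-role-inline-policy.yaml \
    --stack-name dynamo-role-test \
    --capabilities CAPABILITY_IAM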
I hope that helps. Thanks.
Categories: DBA Blogs

How to Read Docker Inspect Output

Fri, 2020-08-14 21:52

Here is a quick and easy set of instructions on how to read docker inspect output:

First you run the command:

docker inspect <image id> or <container id>

and it outputs JSON. You are normally interested in what exactly is inside this Docker image that you have just pulled from the web or inherited in your new job.

Now copy this JSON output and put it into VS Code or any online JSON editor of your choice. For a quick glance, look at the "ContainerConfig" node. This node tells you exactly what was run within the temporary container that was used to build this image, such as CMD, Entrypoint, etc.

In addition to the above, the following is a description of the other important bits of information found in the inspect output (a filtering example follows the list):

  • Id: The unique identifier of the image.
  • Parent: A link to the identifier of this image's parent image.
  • Container: The temporary container created when the image was built.
  • ContainerConfig: What happened inside that temporary container.
  • DockerVersion: The version of Docker used to create the image.
  • VirtualSize: The image size in bytes.
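If you only want one of these fields, docker inspect can filter the JSON for you with a Go template; a couple of hedged examples (the image name is a placeholder):

# show only what the build ran in the temporary container
docker inspect --format '{{json .ContainerConfig}}' nginx:latest

# show the parent image ID and the Docker version that built the image
docker inspect --format '{{.Parent}} {{.DockerVersion}}' nginx:latest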

I hope that helps.

Categories: DBA Blogs

Installing Docker on Amazon Linux 2

Thu, 2020-08-13 00:50
Installing Docker on Amazon Linux 2 is full of surprises that are not easy to deal with. I just wanted to test something within a container environment, so I spun up a new EC2 instance from the following AMI:

Amazon Linux 2 AMI (HVM), SSD Volume Type - ami-0ded330691a314693 (64-bit x86) / ami-0c3a4ad3dbe082a72 (64-bit Arm)

After this Linux instance came up, I just ran yum update to get all the latest stuff:

 sudo yum update

All good so far.
Then I installed/checked yum-utils and grabbed the Docker repo, and all was good there:

[ec2-user@testf ~]$ sudo yum install -y yum-utils
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
Package yum-utils-1.1.31-46.amzn2.0.1.noarch already installed and latest version
Nothing to do

[ec2-user@testf ~]$ sudo yum-config-manager \
>     --add-repo \
>     https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo


Now, it's time to install docker:

[ec2-user@testf ~]$ sudo yum install docker-ce docker-ce-cli containerd.io
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
amzn2-core                                                                                                               | 3.7 kB  00:00:00
docker-ce-stable                                                                                                         | 3.5 kB  00:00:00
(1/2): docker-ce-stable/x86_64/primary_db                                                                                |  45 kB  00:00:00
(2/2): docker-ce-stable/x86_64/updateinfo                                                                                |   55 B  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package containerd.io.x86_64 0:1.2.13-3.2.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: containerd.io-1.2.13-3.2.el7.x86_64
---> Package docker-ce.x86_64 3:19.03.12-3.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-19.03.12-3.el7.x86_64
--> Processing Dependency: libcgroup for package: 3:docker-ce-19.03.12-3.el7.x86_64
---> Package docker-ce-cli.x86_64 1:19.03.12-3.el7 will be installed
--> Running transaction check
---> Package containerd.io.x86_64 0:1.2.13-3.2.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: containerd.io-1.2.13-3.2.el7.x86_64
---> Package docker-ce.x86_64 3:19.03.12-3.el7 will be installed
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-19.03.12-3.el7.x86_64
---> Package libcgroup.x86_64 0:0.41-21.amzn2 will be installed
--> Finished Dependency Resolution
Error: Package: containerd.io-1.2.13-3.2.el7.x86_64 (docker-ce-stable)
           Requires: container-selinux >= 2:2.74
Error: Package: 3:docker-ce-19.03.12-3.el7.x86_64 (docker-ce-stable)
           Requires: container-selinux >= 2:2.74
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest


and it failed. 

So I googled the error "Requires: container-selinux", and every Stack Overflow post and blog says to download a newer rpm from some CentOS or similar mirror, but that simply doesn't work, no matter how hard you try.

Here is the solution that finally got Docker installed on Amazon Linux 2 on this EC2 server:

sudo rm /etc/yum.repos.d/docker-ce.repo

sudo amazon-linux-extras install docker

sudo service docker start

[ec2-user@~]$ docker --version

Docker version 19.03.6-ce, build 369ce74
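As an optional follow-up (not part of the original steps), you can let ec2-user run Docker without sudo and give it a quick test:

sudo usermod -aG docker ec2-user
# log out and back in (or start a new shell) for the group change to take effect
docker run hello-world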


That's it. I hope that helps.
Categories: DBA Blogs

Quick Intro to BOTO3

Mon, 2020-08-10 03:37

I just published my very first tutorial video on YouTube, which gives a quick introduction to AWS Boto3 with a step-by-step walkthrough of a simple program. Please feel free to subscribe to my channel. Thanks. You can find the video here.

Categories: DBA Blogs

Checklist While Troubleshooting Workload Errors in Kubernetes

Fri, 2020-08-07 02:21

Following is a checklist for troubleshooting workload/application errors in Kubernetes (a kubectl mapping of the steps follows the list):

1- First check how many nodes there are

2- Check what namespaces are present

3- Find which namespace the faulty application is in

4- Check which deployment the faulty app belongs to

5- Check which replicaset (if any) is part of that deployment

6- Check which pods are part of that replicaset

7- Check which services are part of that namespace

8- Check which service corresponds to the deployment where our faulty application is

9- Make sure the label selectors from the deployment to the pod template are correct

10- Ensure the label selector from the service to the deployment's pods is correct

11- Check that any service name referenced in a deployment is correct. For example, a webserver pod referring to a database host in the env of its pod template should use the database's service name, spelled correctly

12- Check that the ports are correct in ClusterIP or NodePort services

13- Check that the status of the pod is Running

14- Check the logs of the pods and containers
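A hedged mapping of those steps to kubectl commands (the namespace, deployment, service, and pod names are placeholders):

kubectl get nodes                                            # step 1
kubectl get namespaces                                       # step 2
kubectl get deployments,replicasets,pods -n my-namespace     # steps 3-6
kubectl get svc -n my-namespace                              # steps 7-8
kubectl describe deployment my-deployment -n my-namespace    # steps 9, 11
kubectl describe svc my-service -n my-namespace              # steps 10, 12
kubectl get pods -n my-namespace                             # step 13
kubectl logs my-pod -n my-namespace --all-containers         # step 14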

I hope that helps and feel free to add any step or thought in the comments. Thanks.

Categories: DBA Blogs

Different Ways to Access Oracle Cloud Infrastructure

Thu, 2020-08-06 09:00

This is a quick jot-down of the different ways you can access the ever-improving Oracle Cloud Infrastructure (OCI). Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle Cloud ID (OCID).

You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API. To access the Console, you must use a supported browser; go to the sign-in page and you will be prompted to enter your cloud tenant, your user name, and your password. The Oracle Cloud Infrastructure APIs are typical REST APIs that use HTTPS requests and responses.

All Oracle Cloud Infrastructure API requests must be signed for authentication purposes, and all requests must use HTTPS with TLS 1.2. Oracle Cloud Infrastructure also provides a number of Software Development Kits (SDKs) and a Command Line Interface (CLI) to facilitate development of custom solutions.

Software Development Kits (SDKs): Build and deploy apps that integrate with Oracle Cloud Infrastructure services. Each SDK provides the tools you need to develop an app, including code samples and documentation to create, test, and troubleshoot. In addition, if you want to contribute to the development of the SDKs, they are all open source and available on GitHub.

  • SDK for Java
  • SDK for Python
  • SDK for TypeScript and JavaScript
  • SDK for .NET
  • SDK for Go
  • SDK for Ruby

Command Line Interface (CLI): The CLI provides the same core capabilities as the Oracle Cloud Infrastructure Console, plus additional commands that extend the Console's functionality. The CLI is convenient for developers or anyone who prefers the command line to a GUI.
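As a small hedged taste of the CLI (after a one-time oci setup config; the compartment OCID is a placeholder):

oci setup config        # interactive one-time setup of ~/.oci/config and API keys
oci iam region list     # quick sanity check that authentication works
oci compute instance list --compartment-id ocid1.compartment.oc1..examplecompartment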

Categories: DBA Blogs

Oracle 11g on AWS RDS Will Be Force Upgraded in Coming Months

Thu, 2020-08-06 00:51
To make a long story short: if you have Oracle 11g running on AWS RDS, then start thinking about, planning, and implementing its upgrade to a later version, preferably Oracle 19c.

This is what AWS has to say about this:

Oracle has announced the end date of support for Oracle Database version 11.2.0.4 as December 31, 2020, after which Oracle Support will no longer release Critical Patch Updates for this database version. Amazon RDS for Oracle will end support for Oracle Database version 11.2.0.4 Standard Edition 1 (SE1) for License Included (LI) model on October 31, 2020. For the Bring Your Own License (BYOL) model, Amazon RDS for Oracle will end the support for Oracle Database version 11.2.0.4 for all editions on December 31, 2020. All 11.2.0.4 SE1 LI instances will be automatically upgraded to 19c starting on November 1, 2020. Likewise, the 11.2.0.4 BYOL instances will be automatically upgraded to 19c starting on January 1, 2021. We highly recommend you upgrade your existing Amazon RDS for Oracle 11.2.0.4 DB instances and validate your applications before the automatic upgrades begin. 

The bit that probably applies to most enterprise customers, who are running Oracle 11g with a BYOL license, is this:

January 1, 2021: Amazon RDS for Oracle starts automatic upgrades of DB instances restored from snapshots to 19c.
Instead of leaving it to the last minute, it's better to upgrade sooner. There are lots of things that need to be taken into consideration for this upgrade, both within and outside of the database. If you need a hand with that, feel free to reach out.
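A quick hedged way to find any affected instances with the AWS CLI (the output columns are just what I would pick; adjust as needed):

aws rds describe-db-instances \
    --query "DBInstances[?starts_with(Engine,'oracle') && starts_with(EngineVersion,'11.2')].[DBInstanceIdentifier,Engine,EngineVersion,LicenseModel]" \
    --output table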
Categories: DBA Blogs
