SRE Digest (https://sredigest.com): SRE, ITOps, Observability

Installing Microsoft SQL in Ubuntu 24.04.1 LTS
https://sredigest.com/2024/11/27/installing-microsoft-sql-in-ubuntu-24-04-1-lts/
Wed, 27 Nov 2024 08:09:24 +0000

Microsoft SQL Server is not officially supported on Ubuntu 24.04.1 LTS at the time of writing; the officially supported versions for SQL Server on Linux are Ubuntu 20.04 and 22.04 LTS.

I found a response on a Microsoft forum, but one of the commands given in that response failed for me. The following are the steps that worked for my installation.

Install MSSQL Server 2022 on Ubuntu 24.04

First, register the Microsoft signing key and add the SQL Server 2022 package repository (the Ubuntu 22.04 repository is reused, since Microsoft does not publish one for 24.04 yet):

curl -fsSL https://packages.microsoft.com/keys/microsoft.asc | sudo gpg --dearmor -o /usr/share/keyrings/microsoft-prod.gpg

curl https://packages.microsoft.com/keys/microsoft.asc | sudo tee /etc/apt/trusted.gpg.d/microsoft.asc

curl -fsSL https://packages.microsoft.com/config/ubuntu/22.04/mssql-server-2022.list | sudo tee /etc/apt/sources.list.d/mssql-server-2022.list

SQL Server 2022 depends on libldap-2.5, which is not available in the Ubuntu 24.04 repositories, so download and install it from the Ubuntu 22.04 archive:

curl -OL http://archive.ubuntu.com/ubuntu/pool/main/o/openldap/libldap-2.5-0_2.5.18+dfsg-0ubuntu0.22.04.2_amd64.deb

sudo apt-get install ./libldap-2.5-0_2.5.18+dfsg-0ubuntu0.22.04.2_amd64.deb
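
Before installing the server package, refresh the package index so apt can see the newly added Microsoft repository (this step was not in the original list, but it is normally required after adding a new source):

sudo apt-get update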

Install MS SQL Server

sudo apt-get install -y mssql-server

Configure the MS SQL installation (choose an edition and set the SA password):

sudo /opt/mssql/bin/mssql-conf setup

Post-installation steps: confirm the service is running and open the default SQL Server port (1433) in the firewall.

systemctl status mssql-server --no-pager
sudo ufw allow 1433/tcp
sudo ufw enable
sudo ufw status
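
As an optional sanity check (not part of the original steps), confirm that SQL Server is listening on its default port and, if you have Microsoft's mssql-tools18 package installed, run a quick test query. The SA password below is a placeholder for whatever you chose during mssql-conf setup:

# confirm the server is listening on TCP 1433
ss -tlnp | grep 1433

# optional test query via sqlcmd from mssql-tools18 (-C trusts the self-signed certificate)
/opt/mssql-tools18/bin/sqlcmd -S localhost -U SA -P '<YourStrongPassword>' -C -Q "SELECT @@VERSION"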

Setting Up an ISO Repository in XenServer: A Step-by-Step Guide
https://sredigest.com/2024/10/10/setting-up-an-iso-repository-in-xenserver-a-step-by-step-guide/
Wed, 09 Oct 2024 17:00:00 +0000

In this blog post, we’ll explore how to create a local ISO repository in XenServer, download ISO images, and verify the setup. This process is essential for managing virtual machines effectively, allowing you to install operating systems directly from ISO files.

Step 1: Create a Directory for ISO Files

The first command you will execute is:

mkdir /var/opt/iso

This command creates a new directory named iso under /var/opt. This directory will serve as the storage location for your ISO files. Organizing your files in specific directories helps maintain a clean and manageable file system.

Step 2: Create an ISO Storage Repository

Next, you need to create an ISO storage repository (SR) using the following command:

xe sr-create name-label=ISO_IMAGES_LOCAL type=iso device-config:location=/var/opt/iso device-config:legacy_mode=true content-type=iso
  • xe sr-create: This command initializes the creation of a new storage repository.
  • name-label=ISO_IMAGES_LOCAL: This is the name you assign to your SR, making it easier to identify later.
  • type=iso: Specifies that this repository will store ISO images.
  • device-config:location=/var/opt/iso: Points to the directory where your ISO files will be stored.
  • device-config:legacy_mode=true: Enables legacy mode, which may be necessary for compatibility with certain ISOs.
  • content-type=iso: Defines the type of content stored in this repository.

Upon successful execution, the command will return a UUID (Universally Unique Identifier) for the newly created storage repository.

Step 3: Navigate to the ISO Directory

Now that your directory and storage repository are set up, navigate to the directory you created:

cd /var/opt/iso

This command changes your current working directory to /var/opt/iso, where you’ll download your ISO files.

Step 4: Download an ISO File

To download an ISO image, use the wget command:

wget --no-check-certificate https://download.nus.edu.sg/mirror/ubuntu-releases/24.04.1/ubuntu-24.04.1-desktop-amd64.iso
  • wget: A command-line utility used to download files from the web.
  • --no-check-certificate: This option tells wget to ignore SSL certificate verification, which can be useful if you’re downloading from a site with self-signed certificates.
  • The URL points directly to the Ubuntu 24.04.1 desktop ISO file.

After executing this command, you should see the downloaded file in your /var/opt/iso directory.
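
Optionally (this is not part of the original steps), you can verify the download before using it by comparing its checksum against the mirror’s published SHA256SUMS file; the URL below assumes the mirror follows the standard Ubuntu release layout:

# fetch the published checksums from the same mirror
wget --no-check-certificate https://download.nus.edu.sg/mirror/ubuntu-releases/24.04.1/SHA256SUMS

# compute the ISO's checksum and compare it with the matching line
sha256sum ubuntu-24.04.1-desktop-amd64.iso
grep desktop-amd64 SHA256SUMS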

Step 5: List Files in the Directory

To verify that your ISO files are present, use:

ls /var/opt/iso

This command lists all files in the specified directory. You should see your downloaded Ubuntu ISO alongside any other ISOs you may have added.

Step 6: List Storage Repositories

Finally, check that your storage repositories are correctly configured by running:

xe sr-list

This command displays a list of all storage repositories available on your XenServer. You should see your newly created ISO_IMAGES_LOCAL repository listed among others, confirming that it’s ready for use.
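
If the list is long, you can filter it by the label you assigned earlier and, optionally, rescan the repository so newly downloaded ISOs appear without waiting. This is a small optional addition to the original steps; the UUID below is a placeholder for the value returned in Step 2:

# show only the ISO repository created above
xe sr-list name-label=ISO_IMAGES_LOCAL

# rescan it so freshly downloaded ISO files show up in XenCenter
xe sr-scan uuid=<SR-UUID-from-step-2>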

Visual Confirmation with XenCenter

To further illustrate this process, here are two screenshots from XenCenter:

LocalISO Pool Screenshot:
This screenshot shows the LocalISO pool in XenCenter, confirming that our storage repository has been successfully created.

Installation Media Popup Screenshot:
This screenshot displays the Installation Media popup in XenCenter, listing the available ISO files including ubuntu-24.04.1-desktop-amd64.iso. This confirms that our downloaded ISOs are recognized by XenCenter.

By following these steps, you’ve successfully set up a local ISO repository in XenServer, downloaded an operating system image, and verified both the repository and its contents. This setup is crucial for creating virtual machines and managing installations efficiently. Whether you’re deploying Linux distributions or other operating systems, having a well-organized storage strategy enhances your virtualization experience.

As a side note, we provide a comprehensive solution for monitoring and managing Citrix Hypervisor environments through the eG Enterprise suite. Feel free to reach out if you have any questions or need further assistance with XenServer!

Optimize your AWS costs with Cloud Cost Recommendations
https://sredigest.com/2024/10/03/optimize-your-aws-costs-with-cloud-cost-recommendations/
Wed, 02 Oct 2024 17:00:00 +0000

Managing your AWS costs is both crucial and complex, and as your AWS environment grows, it becomes harder to know where you can optimize and how to execute the necessary changes. Datadog Cloud Cost Management provides invaluable visibility into your cloud spend that enables you to explore costs and investigate trends that impact your cloud bill. And now, Cloud Cost Recommendations automatically identifies the optimizations available to you and makes it easy to execute changes that realize cost savings and help you better manage your cloud spend.

In this post, we’ll show you how Cloud Cost Recommendations helps you identify and optimize AWS cost inefficiencies resulting from legacy services, unused resources, and over-provisioned workloads.

Migrate to current-generation AWS services

The current generation of many AWS services delivers superior performance and improved cost-efficiency compared to legacy services such as previous-generation EC2 instance types and EBS volume types. Migrating to current generation services is often a productive step towards cost optimization, but it can be challenging to identify all of the workloads in your environment that rely on the older versions. Cloud Cost Recommendations automatically detects legacy services in your environment—like previous-generation instances and volumes—and gives you customized guidance for migrating to the current generation.

Let’s say you wanted to look for opportunities to optimize your EBS costs. Cloud Cost Recommendations can show you all the EBS volumes you’re using that are based on previous-generation volume types—such as io1 and gp2, which are less performant and cost-efficient than the gp3 volume type. You can easily filter your cost recommendations to list the gp2 volumes you’re using—including each volume’s ID and tags, as well as the amount of savings you could potentially realize if you migrate it to gp3. In the screenshot below, a single gp2 volume is selected, and the side panel shows the current cost of this volume, a description of the optimization opportunity, and a customized command you can execute from the AWS CLI to migrate the volume from gp2 to gp3.
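
For reference, a gp2-to-gp3 migration issued from the AWS CLI looks roughly like the command below. This is a generic sketch rather than the customized command Datadog generates, and the volume ID is a placeholder:

# change an existing EBS volume's type in place (no detach required)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp3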

The suggested migration offers a simple, one-time optimization, but even after you’ve implemented a recommendation like this, unwanted resources could turn up again in your environment. For example, a team may launch a new gp2 volume instead of gp3, unaware of the cost implications. To help you make a lasting reduction in your AWS spending, Datadog makes it easy to surface cloud cost data and recommendations on the dashboards your teams are already using. You can export data from the Cloud Cost Analytics page or easily deploy the out-of-the-box Cost Optimization Opportunities Powerpack, which groups key cloud cost widgets and displays cost recommendations for EBS, EC2, S3, DynamoDB, and other AWS services.

This Powerpack makes it easy to add cost recommendations to the out-of-the-box dashboards your teams already use to visualize AWS performance and opportunities for savings. When you put cost data in their path, teams can adopt a culture of cost awareness and make application and infrastructure decisions informed by cost-efficiency metrics. You can also use the Powerpack to complement dashboards that show changes in your AWS costs, enabling organization-wide cost reporting and illustrating trends in your cloud spend and optimization efforts.

Find and delete unused resources

As your applications evolve and your teams deploy new features, the growing complexity of your expanding environment can make it challenging to keep tabs on all of the AWS resources you use. As a result, you might end up with unused resources—such as ELBs, Amazon MQ brokers, and RDS tables—that still contribute to your cloud costs without adding value to your business. Even teams that have formal and detailed decommissioning procedures are at risk of leaving unused resources in their environment, leading to wasted cloud spend.

Cloud Cost Recommendations can automatically identify unused AWS resources and surface details to give you the visibility you need to avoid unnecessary costs. In the screenshot below, the Cloud Cost Recommendations page lists unused RDS instances. The side panel shows detailed data from an instance that has no connections and hasn’t been active for over 60 days. It also shows a customized aws rds CLI command you can use to easily delete the unused instance and realize a cost savings of $1,800 per month.
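
As a rough sketch of the kind of command such a recommendation surfaces (the real one is customized per instance), deleting an idle RDS instance from the AWS CLI looks like this. The identifier is a placeholder, and --skip-final-snapshot should only be used when you are certain the data is no longer needed:

# remove an unused RDS instance so it stops incurring cost
aws rds delete-db-instance --db-instance-identifier idle-example-db --skip-final-snapshot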

To achieve ongoing cost optimizations, you can use Datadog App Builder to accelerate fixes like this in the future. App Builder lets you create custom functionality that you can use to easily manage your AWS environment from within the Datadog platform—such as deleting an unused resource. Your apps can help you scale your cost-efficiency initiatives by enabling team members to easily execute AWS actions when Cloud Cost Recommendations detects an opportunity for optimization. To get started quickly using App Builder, you can choose from blueprints that define customizable apps, and you can select and combine prebuilt queries from the Actions Catalog to automate management of your AWS resources.

Downsize overprovisioned workloads

To optimize costs, it’s important to avoid allocating more capacity than necessary for the resources in your environment. But it’s challenging to accurately forecast a new application’s requirements, and once that application is running in production, its resource consumption can change unpredictably as the workload evolves. For example, EBS volumes can easily end up with provisioned IOPS beyond what’s required to execute their workload, leading to low resource utilization and inefficient cloud spend.

By tracking the resource utilization in your AWS environment, Datadog can automatically detect inefficiencies like EBS volumes with unused IOPS. In the screenshot below, Cloud Cost Recommendations has identified a volume with overprovisioned IOPS and throughput, and provides quick visibility into the volume’s details and the costs you can save by rightsizing it. With a single click, you can create a Jira issue or a case in Datadog Case Management to assign activities related to the optimization, including reviewing the cost and performance impacts.

You can also see your cost recommendations for each service in the Service Catalog. By grouping potential savings by service, you can easily see cost trends and savings opportunities across all of the AWS products used by a service. And with cost data alongside service performance metrics and ownership information, the Service Catalog enables you to investigate costs and coordinate plans to rightsize the service’s infrastructure and quickly realize savings opportunities.

Elevate your AWS cost management with Datadog

Cloud Cost Recommendations draws on the deep history of cost and performance data from your AWS resources collected at five-second granularity. Now available in public beta, Cloud Cost Recommendations helps you quickly execute AWS cost optimizations and foster a culture of cost awareness that enables your teams to drive continuous cost efficiency. Set up Cloud Cost Management and see the documentation to get started. If you’re not already using Datadog, you can start today with a free 14-day trial.

Originally published at https://www.datadoghq.com/blog/cloud-cost-recommendations-for-aws/ by Kayla Taylor and Ashwin Venkatesan.

Top 10 cybersecurity misconfigurations and how to avoid them
https://sredigest.com/2024/09/25/top-10-cybersecurity-misconfigurations-and-how-to-avoid-them/
Tue, 24 Sep 2024 17:00:00 +0000

Have you ever heard of the 80/20 rule? The 80/20 rule, also known as the Pareto principle, was named after the Italian economist Vilfredo Pareto. It states that 80% of consequences come from 20% of causes. Though it is a rule of thumb rather than a law, the 80/20 rule has been empirically observed in numerous facets of business.

Cybersecurity is no exception to the 80/20 rule. Let’s assume that 80% of cyberattacks originate from 20% of negligent security practices. Or conversely, 20% of prudent security practices can prevent 80% of cyberattacks. In both ways, it can be inferred that basic security practices can protect your organization from advanced threats. To illustrate this, let’s take a look at three massive and damaging attacks that evolved as a result of simple misconfigurations.

Capital One data leak incident

In 2019, the credit card details of 100 million Capital One Financial Corp. customers were leaked online. The root cause of the data breach was a misconfigured firewall in Capital One’s infrastructure, which allowed a former Amazon employee to access data stored in the company’s AWS S3 buckets and post it on GitHub.

Microsoft Power Apps data leak incident

During the COVID-19 pandemic, around 38 million records from various web applications built on Microsoft Power Apps were exposed to the public. The cause of this massive data leak was the default configuration of the Microsoft Power Apps platform, which required users to manually enable a privacy setting to secure their data. Users who did not enable the privacy setting had their records made publicly accessible.

Mirai attack

Since 2016, Mirai malware, known for its self-propagating botnet, has fed on common misconfigurations to exploit IoT devices. Mirai malware conducts a brute-force attack on vulnerable IoT devices using default credentials to gain unauthorized access to the network and build its botnet.

Default credentials are a common misconfiguration, yet they pose a greater threat to cybersecurity than any other type of misconfiguration. If you are a SOC manager, here are the top 10 misconfigurations that you should keep track of.

Top 10 cybersecurity misconfigurations to look out for

The National Security Agency (NSA) and Cybersecurity and Infrastructure Security Agency (CISA) released a joint cybersecurity advisory (CSA) in 2023 on the most common cybersecurity misconfigurations in large organizations. Consider these as the 20% of misconfigurations that, if not addressed, could cause 80% of cyberattacks in your network.

The following are the misconfigurations identified and published by the NSA and CISA:

1. Default configurations of software and applications

Networking devices and software applications come with factory-set default credentials. It is important to change such default credentials while installing these devices or applications. When the default usernames and passwords remain unchanged, these devices and applications can open backdoors for threat actors to access your network.

2. Improper separation of user/administrator privilege

It is important to segregate your admin accounts from your user accounts and to assess the purpose of an account before granting it privileges. Often, multiple roles are assigned to a single admin or service account to permit domain control, but such excessive privileges concentrated in one account can cause widespread damage across the network if that account is compromised.

3. Insufficient internal network monitoring

Effective network monitoring is achievable only through proper network configuration. Most organizations end up configuring only their hosts, enabling host-based logging for host-based monitoring. Using this method allows you to detect the compromised hosts in the network but not the source of compromise. For thorough internal network monitoring, you should securely configure all devices and applications, including routers, switches, IoT devices, and endpoint security solutions like firewalls and anti-malware solutions that monitor inbound and outbound connections.

4. Lack of network segmentation

A one-size-fits-all approach doesn’t fit an extensive network. Privileges and permissions are not the same throughout a network, so large networks are divided into smaller subnetworks. By dividing your larger network into manageable units, you can easily set up security boundaries within the network and configure unique security controls to each subnetwork.

Without network segmentation, the network exists as a single entity through which attackers can move laterally and escalate attacks, like supply chain attacks and ransomware attacks.

5. Poor patch management

Outdated software and firmware are hotspots for attackers looking to gain access to your networks. In addition, software and firmware can become incompatible with your environment over time. Such weaknesses are like needles in a haystack that can come back to prick you if attackers find them first.

For instance, Log4Shell, a vulnerability in Apache’s Log4j logging library, was subject to a zero-day exploitation in 2021. The zero-day vulnerability enabled remote code executions by attackers, leading to crypto mining, ransomware attacks, and DDoS attacks on the victim systems.

6. Bypass of system access controls

Insecurely stored user credentials enable adversaries to defeat your system access controls, because attackers can then break into your authentication systems without needing actual passwords or security codes. Attacks such as brute-force and password-spray attacks are carried out using stolen credentials, giving adversaries the initial access they need to launch more sophisticated cyberattacks.

7. Weak or misconfigured multi-factor authentication (MFA) methods

The conventional password login process is now being replaced by MFA methods like smart cards and smart tokens. As your organization switches to new methods of authentication, you may forget about adhering to previous password policies. But the hashes for passwords no longer in use still exist and can still be used by threat actors to enter your network.

8. Insufficient access control lists (ACLs) on network shares and services

A network ACL encompasses all permissions associated with a network resource. When you do not configure the ACLs properly for shared network resources, unauthorized personnel might gain access to sensitive data shares, repositories, and administrative data on shared drives.

Threat actors can access your sensitive data shares by using commands, open-source tools, or custom malware. These data shares might contain personally identifiable information (PII); service account and web application credentials; service tickets; and other information relating to your network like network topology, vulnerability scan reports, and threat modelling data. All this information can be exfiltrated to execute a ransomware attack, DDoS attack, or social engineering attack. It is vital that you closely inspect all permissions associated with your network resources.

9. Poor credential hygiene

Poor credential hygiene refers to the use of weak passwords that can be easily cracked, improperly configured MFA, and unprotected storage of passwords. A credential compromise takes place when a clear-text password or a password hash is stolen by adversaries. It is vital to implement strong password policies that comply with the NIST guidelines and to configure MFA properly (for example, phishing-resistant MFA) to secure the entry points to your network.

10. Unrestricted code execution

It is important to keep track of all executable files in your network. Attackers never tire of luring users into clicking phishing emails that auto-execute malicious scripts and code in the background. According to the joint advisory from CISA and the NSA, adversaries execute unverified code in the form of executables, dynamic link libraries (DLLs), HTML applications, and macros (scripts used in office automation documents) to exploit a network after gaining initial access.

You can prevent malicious code executions by enabling system settings that prevent downloads from unverified sources and also restrict program executions by analyzing digital signatures, certificates, and key attributes.

Attackers are continuously looking for ways to exploit the 20% of misconfigurations in your network. To keep your organization from taking the bait, your SOC team needs to evaluate and adjust your network to ensure it’s free from the misconfigurations above. Investing in a SIEM solution like ManageEngine Log360 is a strong first step in defending against the ramifications of cybersecurity misconfigurations.

Here’s how you can secure your network using Log360:

  • Get complete visibility into your network with real-time logging and predefined audit reports.
  • Monitor privileged user activities and lateral movement with predefined correlation rules.
  • Defend against potential threats with real-time alerts.
  • Track anomalous user activities with user and entity behavior analytics.
  • Prevent potential attacks with automated incident response.

Would you like to know more? Then sign up for a personalized demo today.

First published in Top 10 cybersecurity misconfigurations and how to avoid them.

Creating Local Users in Windows 11: A Command Line Approach
https://sredigest.com/2024/03/22/creating-local-users-in-windows-11-a-command-line-approach/
Thu, 21 Mar 2024 21:00:00 +0000

While Microsoft often pushes users towards cloud-based accounts, there are still situations where creating local accounts on your Windows machine is necessary. This could be for managing remote endpoints, maintaining a specific security posture, or simply personal preference.

This blog post will guide you through creating a local user account with administrator privileges using the command line in Windows 11.

Why Use the Command Line?

There are several reasons why you might choose to create a user via the command line:

  • Efficiency: For experienced users, the command line can be a faster way to create multiple accounts.
  • Remote Management: If you’re managing multiple machines remotely, scripting user creation through the command line can be a real time-saver.
  • Customization: The command line offers more granular control over user creation compared to the graphical interface.

Prerequisites:

  • An administrator account on the Windows 11 machine.
  • Basic understanding of command prompt navigation.

Creating the User Account

  1. Open an elevated command prompt: search for “cmd” in the Start menu, right-click “Command Prompt,” select “Run as administrator,” and click “Yes” if prompted by User Account Control (UAC).
  2. Create the user account with the following command, replacing USERNAME with your desired username and PASSWORD with a strong password (enclose the password in quotation marks if it contains spaces):
net user USERNAME PASSWORD /add
  3. Press Enter. A success message will indicate that the user account has been created.

Granting Administrator Privileges

By default, newly created user accounts are standard accounts. To grant administrator privileges to the new user, use the following command:

net localgroup administrators USERNAME /add

Replace USERNAME with the username you just created. Press Enter. Again, a success message will confirm that the user has been added to the Administrators group.

Disable / deactivate a user

Use the following command, replacing USERNAME with the name of the account you want to disable:

net user USERNAME /active:no


Activate a user

Similarly use the following command, replacing USERNAME with the name of the account you want to activate:

net user USERNAME /active:yes


Tested on Windows 11 (Microsoft Windows [Version 10.0.22631.3235]).

Top 10 Cloud IAM Operations automation for security risk mitigation
https://sredigest.com/2024/03/15/top-10-cloud-iam-operations-automation-for-security-risk-mitigation/
Thu, 14 Mar 2024 21:00:00 +0000

Overview

Identity, authentication, and authorization are paramount components of cloud security. A hardened cloud IAM configuration acts as perimeter-level security for your cloud workloads. IAM policy permissions and trust boundaries are complex and should be evaluated periodically, and security-control hardening should be applied based on the access behaviour patterns of your users, roles, and service accounts.

This blog covers the most common best practices for acting on CIEM-based security findings, so you can achieve operational excellence through operations automation at scale.

DISCLAIMER: This document discusses risk mitigation techniques, not risk elimination methods. To eliminate the risk of cloud identity misconfiguration, service control policies are the best preventive measure; refer to your cloud service provider’s service control policy documentation to learn how to create policies that prevent misconfiguration of the cloud IAM service. However, because of their technical limitations, service control policies cannot be applied to every operational use case.

This blog covers solutions and use cases for the following problems:

  1. How can I scale CIEM operations across thousands of resources, their permissions, and trust-boundary configurations, and across a multi-cloud environment (AWS, Azure, and GCP)?
  2. Which best-practice use cases should I start with to achieve operational excellence?
  3. How do I automate the handling of CIEM security best-practice findings without disrupting agility, innovation, or the business?

Use cases: Operations best practices and automation workflows

NOTE: The top 10 operations automation use cases below are illustrated with AWS, Azure, and GCP terminology, but the same ideas apply to other cloud service providers as well.

1) Identify human users’ access patterns from AWS IAM Access Advisor, Azure access reviews, or similar cloud-native tools, and enforce least privilege with permissions boundaries or policy hardening.

2) Identify system users (cloud services such as EC2 instances, containers, Cloud Functions, Azure Functions, etc.) from AWS IAM Access Advisor, Azure access reviews, or similar cloud-native tools, and enforce least privilege with permissions boundaries or policy hardening.

Least-privilege automation demo using autobotAI.

3) Build an AWS IAM Access Analyzer finding fetcher that checks whether any external principal is set to "ALL PRINCIPALS"; if so, the action should be to delete the IAM role and/or apply least privilege based on Access Advisor data, with a custom approval flow.

4) Build an automation workflow around Azure access reviews for guest users, all users, and applications: analyze the access review results and apply least privilege to each user after a mail-, MS Teams-, or Slack-based approval.

5) Build an automation workflow bot that analyzes Azure AD guest users from access reviews; if an anomaly is found in user creation or access, disable the user after approval and alert the security team about the indication of compromise.

6) Build an automation workflow bot that analyzes Azure AD application access review findings and applies least privilege and/or disables the application, based on approval from the application owner (over MS Teams, Slack, or Google Chat).

7) Build an automation workflow to identify AWS/Azure/GCP IAM users that are not part of any group, and move each identified user into a default user group with read-only permissions and a default-deny policy attached.

8) Build an automation workflow that identifies cross-cloud access in IAM roles, service accounts, and Azure applications, and applies least privilege based on Access Advisor / access review findings.

9) To reduce the risk of human error terminating business-critical workloads, create an automation workflow that automatically identifies users, roles, and service accounts with termination permissions, and apply a permissions boundary with a default-deny policy for the specific resources.

10) Create a workflow that detects "IAM access key exposed" findings from security services such as AWS Trusted Advisor or Azure Security Center, and automatically disables the access keys after human approval (see the sketch after this list).
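
As a rough illustration of the action such a workflow would take on AWS (a sketch only; the real workflow, approvals, and tooling are platform-specific), disabling an exposed access key with the AWS CLI looks like this, with placeholder user and key IDs:

# disable (rather than delete) the exposed key so it can be rotated safely
aws iam update-access-key --user-name example-user --access-key-id AKIAEXAMPLEKEY12345 --status Inactive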

NOTE: The examples above are just a few of the best practices we have identified from compliance frameworks. Every organization has a different risk appetite, and such automation workflow bots can be created for each unique requirement.

How to automate: Call to action

Multi-cloud automation requires mutation permissions on cloud resources (e.g., create, update, delete). For sound risk management, it is important not to expose such permissions to an external entity.

Step 1: Select an automation platform with a #zerotrust architecture that does not collect your data and does not require configuring a permission trust relationship with an external provider.

Step 2: Deploy an automation workflow workspace in your centralized cloud automation account, and add all of your cloud accounts, security tools, and communication tools (e.g., Google Chat, MS Teams, mail) to the workspace.

Option 1 – Step 3: Select automation workflow bots that help automate the use cases given above.

Option 2 – Step 3: Create your own automation workflows with generative AI assistance and no code.

Step 4: Assign the automation workflow bots to your AWS, Azure, or GCP accounts.

Step 5: Schedule the automation workflow bots to execute at a set interval, or integrate them with third-party security tools via a webhook configuration.

Conclusion:

CIEM, CSPM, and other Day 2 cloud operations that can be automated should be automated, with the goal of improving security posture. Automation use cases should be selected carefully through detailed cloud operations process mining. Centralized management of these automation workflows is necessary to provide traceability, human approvals, and scalable workflow deployment. Last but not least, the operations automation workflows should be configured to enforce least-privilege permissions for the automation platform, with support for a zero-trust architecture.

Originally appeared at https://medium.com/@hello_26308/ciem-top-10-operations-automation-use-cases-for-security-risk-mitigation-3dbe986eb30a

Overview of data analysis
https://sredigest.com/2024/03/08/overview-of-data-analysis/
Thu, 07 Mar 2024 21:00:00 +0000

Raw data is like a pile of scattered puzzle pieces. It holds potential stories, but it needs organization first. Data analysis acts as the assembler, cleaning, sorting, and connecting the pieces to form a clear picture. This refined data then becomes the building blocks of compelling narratives, presented in reports that guide crucial decisions. Think of it as transforming numbers into a powerful language that reveals hidden truths and shapes the future.

In today’s data-flooded world, businesses desperately need to wield the power of narratives spun from their own information. That’s where data analysis enters the scene, becoming a crucial weapon for both small and large companies. This explains the ever-growing demand for data analysts who can unlock the true potential of this hidden goldmine.

Instead of relying on gut feeling, data-driven businesses base their decisions on the compelling stories their data reveals. Yet, most companies struggle to extract full value from their information. This is where data analysis steps in, transforming raw numbers into actionable insights. From gauging customer sentiment to predicting trends, it plays a critical role in understanding a business’s impact and navigating the market landscape.

But data analysis isn’t just about crunching numbers and creating fancy charts. It’s a strategic approach with a defined purpose. To truly leverage its power, we need to understand the core components:

  • Descriptive: Painting a clear picture of what’s happening
  • Diagnostic: Delving deeper to uncover the “why” behind the data
  • Predictive: Gazing into the future to forecast trends and outcomes
  • Prescriptive: Recommending the best course of action based on insights
  • Cognitive: Utilizing advanced AI to automate and optimize decision-making

By mastering these components, businesses can transform data from a silent observer into a powerful storyteller, guiding them towards a brighter future.

Descriptive Analytics

Imagine data as a blurry photograph. Descriptive analytics are like the tools that sharpen it, revealing what took place. Using historical data, these techniques extract summaries from complex models, helping stakeholders understand past outcomes.

Think of Key Performance Indicators (KPIs) as the flashlights of this process. They highlight specific objectives, letting you track whether they’re achieved or not. Metrics like Return on Investment (ROI) are universal tools, while others are unique to different industries.

Picture generating a report on your sales and finances. That’s descriptive analytics in action, painting a clear picture of the past for informed decision-making.

Diagnostic Analytics

Diagnostic analytics help answer questions about why events happened. Diagnostic analytics techniques supplement basic descriptive analytics, and they use the findings from descriptive analytics to discover the cause of these events. Then, performance indicators are further investigated to discover why these events improved or became worse. Generally, this process occurs in three steps:

  • Identify anomalies in the data. These anomalies might be unexpected changes in a metric or a particular market.
  • Collect data that’s related to these anomalies.
  • Use statistical techniques to discover relationships and trends that explain these anomalies.

Predictive Analytics

Predictive analytics help answer questions about what will happen in the future. Predictive analytics techniques use historical data to identify trends and determine if they’re likely to recur. Predictive analytical tools provide valuable insight into what might happen in the future. Techniques include a variety of statistical and machine learning techniques such as neural networks, decision trees, and regression.

Prescriptive Analytics

Imagine you want to reach a specific business goal, but the path ahead is shrouded in uncertainty. That’s where prescriptive analytics comes in, shining a light on the best course of action. By leveraging insights from this powerful technique, you can make data-driven decisions, navigating even the foggiest situations with confidence.

Think of it as having a wise advisor at your side. Prescriptive analytics analyzes past decisions and events, employing machine learning to decipher complex patterns within your data. It’s like building a map based on past journeys, highlighting the paths most likely to lead you to your desired destination.

With this valuable knowledge, you can:

  • Estimate the likely outcome of different choices: No more guesswork! Prescriptive analytics calculates the potential impact of each option, empowering you to select the strategy most likely to succeed.
  • Make informed decisions in uncertain times: Even when the future is unclear, data-driven insights from prescriptive analytics provide a solid foundation for confident decision-making.
  • Optimize your strategies for maximum impact: By understanding the potential consequences of your actions, you can fine-tune your approach for the best possible results.

Prescriptive analytics is like having a crystal ball combined with a strategic roadmap. It’s not just about knowing what might happen, but about choosing the optimal path to ensure success. So, if you’re ready to ditch the guesswork and embrace data-driven decision-making, prescriptive analytics is your key to unlocking a brighter future.

Cognitive Analytics

Cognitive analytics takes it a step further, turning knowledge into a self-evolving superpower for your business. Unlike traditional analytics that simply react to data, cognitive analytics actively learns and adapts, like a superpowered sponge soaking up information.

  • Thinking Like a Scientist: It draws inferences from data patterns, using its existing knowledge base to form conclusions. Imagine it as a detective piecing together clues.
  • Self-Learning Loop: These conclusions are then fed back into the knowledge base, making it smarter and more insightful over time. Think of it as a student constantly updating their textbook.
  • Beyond Predictions: While traditional analytics predict what might happen, cognitive analytics goes further. It asks “what if?”, helping you understand how your business might react to changing circumstances.
  • Unstructured Genius: Forget rigid rules, cognitive analytics thrives on unstructured data like conversations and reviews. It uses natural language processing to unlock hidden insights from these untapped sources.

Imagine understanding customer sentiment before it becomes a PR crisis, or predicting equipment failure before it disrupts production. That’s the power of cognitive analytics, equipping you to navigate the unknown with confidence. It’s like having a wise consultant constantly learning and evolving alongside your business, ready to answer your “what if?” questions and guide you towards success.

Let’s understand through an Example

By enabling reporting and data visualizations, a retail business uses descriptive analytics to look at patterns of purchases from previous years to determine what products might be popular next year. The company might also look at supporting data to understand why a particular product was popular and if that trend is continuing, which will help them determine whether to continue stocking that product.

A business might determine that a certain product was popular over a specific timeframe. Then, they can use this analysis to determine whether certain marketing efforts or online social activities contributed to the sales increase.

An underlying facet of data analysis is that a business needs to trust its data. As a practice, the data analysis process will capture data from trusted sources and shape it into something that is consumable, meaningful, and easily understood to help with the decision-making process. Data analysis enables businesses to fully understand their data through data-driven processes and decisions, allowing them to be confident in their decisions.

© All rights reserved by the author.
Originally appeared at https://swetasarangi.hashnode.dev/overview-of-data-analysis

Disable console logging in Spring Boot
https://sredigest.com/2024/02/29/disable-console-logging-in-spring-boot/
Thu, 29 Feb 2024 15:14:40 +0000

Spring Boot is renowned for its robust and flexible logging options. While offering a convenient console output by default, it can sometimes become overwhelming for operations engineers. This post delves into a handy trick to redirect Spring Boot logs to a designated file, offering greater control and clarity.

When running a Spring Boot application, logs are displayed on the console by default. While this can be helpful for initial testing and debugging, it can quickly become excessive in production environments. Every log message, including informational or even harmless warnings, can trigger unnecessary concern for ops personnel accustomed to associating console output with potential issues (unless explicitly indicating success, completion, etc.).

Furthermore, redirecting console output to files for centralized logging can capture extraneous logs, making it difficult to identify critical messages.

Fortunately, Spring Boot allows you to effortlessly redirect logs to a file, offering a cleaner and more manageable approach. Here’s how:

  1. application.properties: Locate your application’s application.properties file. This file acts as a central configuration hub for various Spring Boot settings.
  2. Configuration Parameters: Add the following lines to your application.properties file:
# Empty value disables console logging
logging.pattern.console=  
# Path to your desired log file
logging.file.name=C:/temp/application.log  
  • logging.pattern.console=: Setting this property to an empty string effectively disables console logging.
  • logging.file.name=: This property specifies the location of your desired log file. In this example, logs will be written to C:/temp/application.log.
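
If you only want console logging disabled in certain environments (an assumption about your setup, not something covered above), Spring Boot’s profile-specific property files are one way to scope the change; the file name below follows the standard application-<profile>.properties convention, and the log path is a placeholder:

# application-prod.properties: loaded only when the 'prod' profile is active
logging.pattern.console=
logging.file.name=/var/log/myapp/application.log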

This post is written as part of the #WriteAPageADay campaign of BlogChatter

Adding local jars to Maven and Building Spring Boot jar file
https://sredigest.com/2024/02/28/adding-local-jars-to-maven-and-building-spring-boot-jar-file/
Wed, 28 Feb 2024 13:51:14 +0000

Most libraries are available in the Maven repository, yet some third-party APIs are distributed only as standalone jar files. I wrote about this earlier.

Add local jar files to Maven Build Path

Eclipse project dependency and Maven

Those approaches still work fine for core Java applications. However, the situation is different when you build Spring Boot jars: though the build succeeds, the single Spring Boot jar file will not carry the local jar, which leads to a ClassNotFoundException at runtime.

The purpose of this post is to resolve that problem. Different solutions are suggested in the forums, but this is what worked for me today.

  1. Install Maven
  2. Install local jars to Maven repository
  3. Use the jar in pom.xml

Install Maven

Download the Maven binary from the Apache Maven download page. I downloaded the zip file for Windows and extracted it to C:\tools\apache-maven-3.9.6.

Open a command prompt to check that the Maven deployment is working. You need to set JAVA_HOME for Maven to work; I point it at the OpenJDK installation on my C: drive (note there must be no spaces around the equals sign):

set JAVA_HOME=c:\openjdk17

Issue the following command to print the Maven version; the output should include a line like the one below.

c:\tools\apache-maven-3.9.6\bin\mvn -version

Maven home: c:\tools\apache-maven-3.9.6

Install local jar

Running Maven creates the local repository under your user folder (C:\Users\<username>\.m2\repository).

Maven repository (screenshot)

Let’s install our local jar into the Maven repository. As described in the Maven documentation’s Guide to installing 3rd party JARs, the syntax is:

mvn install:install-file -Dfile=<path-to-file> -DgroupId=<group-id> -DartifactId=<artifact-id> -Dversion=<version> -Dpackaging=<packaging>

It may also be possible to do this via the project’s pom.xml, but I haven’t tried that.

c:\tools\apache-maven-3.9.6\bin\mvn install:install-file -Dfile=c:\external_libraries\ireasoningsnmp.jar -DgroupId=com.snmp -DartifactId=ireasoning -Dversion=7.2 -Dpackaging=jar

This will create a folder structure in the local repository based on the group ID, artifact ID, and version, and the jar file will be copied inside it.

Use the jar in pom.xml

Add this to your dependencies.

<dependency>
	<groupId>com.snmp</groupId>
	<artifactId>ireasoning</artifactId>
	<version>7.2</version>
</dependency>

This not only lets you run the application from STS, but also ensures the local jar is packaged inside your Spring Boot executable jar.
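
As an optional check that is not in the original steps, you can confirm the library actually made it into the packaged artifact by listing the contents of the boot jar after a build; the jar name below is a placeholder for your build output, and on Windows cmd you can pipe to findstr instead of grep:

# the dependency should appear under BOOT-INF/lib/ inside the Spring Boot jar
jar tf target/your-app-0.0.1-SNAPSHOT.jar | grep ireasoning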

This post is written as part of the #WriteAPageADay campaign of BlogChatter

Artificial Intelligence for IT Operations – AIOPS
https://sredigest.com/2024/02/27/artificial-intelligence-for-it-operations-aiops/
Tue, 27 Feb 2024 13:59:46 +0000

We collect metrics, events, logs, and traces (MELT) to attain observability. When we get an event, we assign it to its rightful owner, who in turn takes a decision on how to resolve it.

Let’s take a simple operation to understand this. Assume a server is running at 99% CPU.

  • Metrics collection identifies it.
  • The observability platform sends a notification to the IT service management solution (or emails the user directly).
  • Let’s assume the problem is assigned to an application owner, on the assumption that it is an application problem.
  • The application owner checks the observability platform for the root cause, which shows that the CPU utilization is due to high CPU consumption by the web server.
  • They restart the service, and let’s assume the problem is resolved.

AIOps leverages machine learning and artificial intelligence to automate and optimize such IT operations. The above example is a simple use case, but in real life, event correlation and automated or manual root cause analysis are involved in deciding on a remediation.

An effective AIOps platform may perform the following duties –

  • A holistic AIOps platform may collect the MELT data on its own; other platforms ingest data from monitoring, log aggregation, and security tools.
  • Send notifications about problems. Problems are detected based on thresholds, which may be static or dynamic.
  • Analyze the root cause: determine whether the problem is due to the network, infrastructure, application, or something else. This is a crucial task; a faulty root cause may lead to an incorrect decision by the AIOps tool.
  • Based on the root cause, AIOps tools are supposed to provide recommendations and automation. How efficiently and accurately an AIOps platform can decide is a continuing matter of debate. Based on its analysis, it may take corrective actions such as adding resources, kicking off auto-scaling, restarting the necessary service, or rebooting the whole server.

Sometimes I may decide to restart the service; sometimes to kick off auto-scaling to distribute the load; sometimes to reboot the machine. These decisions are based on real-world considerations. How can an AIOps platform make such a decision on its own?

While AIOps tools are constantly evolving and gaining capabilities, it is debatable whether they are currently designed to fully replace human decision-making in critical situations. But such a tool can certainly help analyze data patterns and suggest level-1 remediation tasks such as restarting the service, kicking off auto-scaling, or rebooting the whole machine.

This post is written as part of the #WriteAPageADay campaign of BlogChatter
