PTP | Cloud Experts | Biotech Enablers (https://ptp.cloud/)
Helping innovative life sciences companies get treatments to market faster.

Published Fri, 04 Apr 2025 | https://ptp.cloud/ptp-automates-image-builder-pipeline-device42/


How PTP Helped Device42 Cut Downtime by 93% with AWS Lambda Automation


Device42, a global tech company trusted in over 70 countries, faced growing inefficiencies from a manual image-building pipeline that slowed releases and risked downtime. PTP stepped in to design an automated deployment framework using AWS Lambda, Amazon Machine Images (AMIs), and CloudWatch. The result? A highly scalable, self-healing system that slashed deployment downtime by 93% and recovered 7–10 hours of engineering time monthly—empowering Device42 to scale faster and innovate with confidence.

  • 93% reduction in downtime
  • 7–10 hours of engineering time saved per month
  • Used by organizations in over 70 countries

The Challenge

Device42, a technology company trusted by organizations in over 70 countries, faced a critical bottleneck in its operational efficiency. Its Image Builder pipeline relied heavily on manual processes for creating, testing, and deploying system images. This labor-intensive approach introduced multiple pain points:

  • Excessive engineering time spent on repetitive manual tasks
  • Increased risk of human error and inconsistent configurations
  • Prolonged and unpredictable deployment cycles
  • Frequent downtime during updates (30–60 minutes per deployment)
  • Delayed feature releases and lack of scalability
  • Hindered ability to meet growing global demand

To maintain its competitive edge and ensure seamless service, Device42 needed to transform this fragile, time-consuming workflow into a resilient, automated pipeline capable of accelerating deployments, minimizing downtime, and delivering consistent, repeatable results across hybrid cloud environments.

The Solution

PTP designed a scalable automation framework to revolutionize the Image Builder pipeline. Key elements included:

AWS Lambda functions as the core orchestration layer

  • Triggered manually for scheduled releases or automatically via CloudWatch alarms during infrastructure issues

An automated pipeline that leveraged:

  • Auto Scaling Groups to manage server capacity dynamically
  • Load Balancers to optimize traffic distribution

Together, these eliminated downtime and manual scaling effort.
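The case study doesn't publish the implementation, but the alarm-to-redeploy flow it describes can be sketched as a Lambda handler: a CloudWatch alarm (delivered through SNS) triggers an Auto Scaling instance refresh onto instances built from the latest AMI. All resource names here are hypothetical, and the boto3 call requires AWS credentials:

```python
import json

# Hypothetical name: the case study does not give the real
# Auto Scaling group or alarm identifiers.
TARGET_ASG = "device42-appliance-asg"

def should_redeploy(sns_event: dict) -> bool:
    """True when a CloudWatch alarm delivered via SNS has entered ALARM state."""
    message = json.loads(sns_event["Records"][0]["Sns"]["Message"])
    return message.get("NewStateValue") == "ALARM"

def start_instance_refresh(asg_name: str = TARGET_ASG) -> None:
    """Roll the Auto Scaling group onto instances built from the latest AMI.
    Requires AWS credentials; not exercised in this sketch."""
    import boto3
    boto3.client("autoscaling").start_instance_refresh(
        AutoScalingGroupName=asg_name,
        Preferences={"MinHealthyPercentage": 90},  # keep serving traffic during the roll
    )

def handler(event: dict, context=None) -> str:
    if should_redeploy(event):
        start_instance_refresh()
        return "refresh-started"
    return "no-op"
```

A manual release would invoke the same function directly, so scheduled and self-healing deployments share one code path.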

The Outcome

Through PTP’s automation expertise, Device42 now operates a fully automated, cloud-native deployment framework, delivering measurable business benefits:

  • 7–10 hours saved per month in engineering effort
  • ~93% reduction in deployment downtime (from 30–60 minutes down to just 2–4 minutes)
  • Increased release velocity through automation
  • Improved operational resilience and system reliability
  • Consistent infrastructure management across hybrid environments
  • Scalable DevOps foundation to support future innovation

Ready to Eliminate Downtime and Accelerate Deployments?

Partner with PTP to automate your infrastructure and unlock faster, more reliable delivery across hybrid cloud environments. Contact us today to get started.

Scale Smarter, Not Harder

Let PTP help you modernize your infrastructure and reduce downtime. Schedule a free consultation today!


The post How PTP Helped Device42 Cut Downtime by 93% with AWS Lambda Automation appeared first on PTP | Cloud Experts | Biotech Enablers.

The Impact of AWS Lambda’s End of Support for Older Python and Node.js Runtimes: Why Migrating is Critical for Your Cloud Strategy
https://ptp.cloud/aws-lambda-python-nodejs-runtime-migration/ | Published Sat, 16 Nov 2024


PTP Solves: Migrating AWS Lambda Runtimes for Secure, Compliant Biotech Applications

As biotech and pharmaceutical research organizations increasingly adopt cloud-based solutions to accelerate data processing and analysis, the tools that support these workflows must evolve to meet growing demands for performance, security, and scalability. For many businesses relying on AWS Lambda to run lightweight, event-driven applications, these changes can have a significant impact on operations. In particular, AWS regularly announces the end of support for older Python and Node.js runtimes, which means companies need to be aware of deprecations and have a plan of action.

In this post, we’ll explore the key reasons why migrating away from these outdated Lambda runtimes is crucial and how you can smoothly transition to newer, supported versions to ensure your serverless applications remain reliable, secure, and performant.

What Does AWS Lambda’s End of Support for Older Runtimes Mean?

AWS Lambda allows code to run without the need for provisioning or managing servers, supporting multiple programming languages, including Python, Node.js, Java, and more. Each of these languages has an associated runtime, which includes the programming language and the associated libraries and dependencies Lambda requires to execute the code. However, like any technology, languages evolve, and older versions eventually reach their end of life.

AWS has announced that it will stop supporting several older versions of Python and Node.js in Lambda. This means that Lambda functions running on these runtimes will no longer receive security patches, performance updates, or bug fixes, potentially leaving serverless workloads vulnerable or less efficient.

 

Key Risks of Using Outdated Runtimes

1. Security Vulnerabilities

In the biotech and pharmaceutical industries, data security and patient confidentiality are of utmost importance. Once a runtime is deprecated, it no longer receives critical security updates. Research organizations processing sensitive data—whether related to clinical trials, genetic research, or drug discovery—may expose themselves to data breaches and compliance issues by continuing to rely on deprecated runtimes. Security vulnerabilities can lead to unauthorized access, data loss, or damage to research integrity.

2. Decreased Performance and Efficiency

In research environments where large datasets are analyzed and processed frequently, performance is critical. Older runtimes are not optimized for the latest AWS infrastructure, which can result in inefficient execution of Lambda functions. Biotech and pharma organizations that rely on Lambda for time-sensitive applications—such as real-time analytics, data pipelines, or simulations—may experience delays and increased compute costs if their functions are running on outdated runtimes. Migrating to a newer runtime ensures that Lambda functions run with the latest performance improvements, enabling faster processing and more efficient use of cloud resources.

3. Compatibility Issues with New Technologies

The pharmaceutical and biotech sectors often leverage cutting-edge technologies like machine learning, artificial intelligence, and high-performance computing. As new AWS features are released, older runtimes are not updated and, therefore, may not be compatible. This could limit the ability to integrate Lambda functions with emerging technologies and best practices. Updating runtimes ensures seamless integration with new AWS services, providing better support for complex research workflows and data pipelines.

4. Increased Operational Complexity

Biotech and pharmaceutical research organizations must comply with strict regulatory standards, such as 21 CFR Part 11, HIPAA, and GDPR. Operating Lambda functions on unsupported runtimes can create additional complexity, as troubleshooting and patching vulnerabilities will no longer be managed by AWS. Additionally, after a time specified by AWS, organizations will not be able to update or maintain the code in Lambda functions with very out-of-date runtimes. This greatly increases the likelihood of errors, downtime, and regulatory compliance risks. Migrating to a supported runtime simplifies operations and ensures that Lambda functions remain secure and compliant. 

Benefits of Migrating to Supported Runtimes

1. Access to New Language Features and Enhanced Security

Migrating to newer Python and Node.js versions unlocks access to new language features and improvements that can be critical for modern research workflows. Newer releases of Python, for example, have offered improved support for asynchronous programming, which is essential for efficiently processing large amounts of data. Node.js releases have introduced JavaScript features like optional chaining and nullish coalescing, which enhance the ability to handle complex logic in research applications. Moreover, these newer versions receive regular security patches, which ensures that sensitive research data remains secure.
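As a small illustration of the asynchronous fan-out that modern Python runtimes make practical (the record names and the simulated I/O delay are ours, not from any PTP pipeline): many records are processed concurrently rather than one at a time, so total wall time is roughly one I/O round-trip instead of one per record.

```python
import asyncio

async def process_record(record: str) -> str:
    # Placeholder for per-record work (parsing, an API call, a DB write);
    # asyncio.sleep stands in for I/O latency.
    await asyncio.sleep(0.01)
    return record.upper()

async def process_batch(records: list) -> list:
    # gather() runs the per-record coroutines concurrently and
    # returns results in input order.
    return await asyncio.gather(*(process_record(r) for r in records))

results = asyncio.run(process_batch(["sample-a", "sample-b", "sample-c"]))
```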

2. Improved Integration with AWS Services

AWS Lambda functions often serve as a core component of larger research systems that integrate with other AWS services like Amazon S3, DynamoDB, AWS Batch, SageMaker, HealthOmics, and HealthLake. Newer runtimes are better optimized for these integrations, making it easier to build efficient, scalable research workflows. For example, AWS Step Functions, which is used to coordinate Lambda functions and other AWS services, works more effectively with the latest runtimes, enabling the creation of robust, automated research pipelines.

3. Better Compliance and Regulatory Alignment

In highly regulated industries like pharmaceuticals, maintaining compliance with industry regulations is crucial. Using outdated runtimes can create security and data integrity gaps that may violate compliance requirements. Newer runtimes are supported by AWS’s security framework, ensuring that Lambda functions remain in line with industry regulations and standards, reducing the risk of non-compliance during audits or inspections.

4. Enhanced Performance and Cost Efficiency

In the research space, optimizing the performance of Lambda functions can lead to research acceleration. Newer runtimes are more efficient in terms of execution speed and resource utilization. For example, functions running on these updated runtimes are able to process data faster, which reduces compute costs and time. In biotech and pharmaceutical research, where large volumes of data are processed regularly, these savings can quickly add up.

How to Migrate to Newer Runtimes

1. Evaluate Current Lambda Functions

The first step in migrating is identifying which Lambda functions are still running on outdated runtimes. This can be done by reviewing the AWS Lambda console and checking the runtime settings for each function.
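For accounts with many functions, the same inventory can be scripted. A minimal sketch, assuming boto3 and read-only Lambda permissions; the deprecated-runtime set below is illustrative and should be checked against AWS's current deprecation schedule:

```python
# Illustrative set; confirm against AWS's published runtime deprecation schedule.
DEPRECATED = {"python3.7", "python3.8", "nodejs14.x", "nodejs16.x"}

def flag_deprecated(functions, deprecated=frozenset(DEPRECATED)):
    """Return (function name, runtime) pairs for functions on a deprecated runtime."""
    return [(f["FunctionName"], f["Runtime"])
            for f in functions
            if f.get("Runtime") in deprecated]

def list_all_functions():
    """Page through every Lambda function in the account/region.
    Requires AWS credentials; not exercised in this sketch."""
    import boto3
    pages = boto3.client("lambda").get_paginator("list_functions").paginate()
    return [fn for page in pages for fn in page["Functions"]]
```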

2. Update Code for Compatibility

After identifying the functions to update, assess the codebase for compatibility with the newer runtime versions. This might involve:

  • Updating dependencies to newer versions that are compatible with the most current Python or Node.js version
  • Refactoring code to take advantage of new language features
  • Testing the updated functions to ensure they perform as expected in the new runtime environment
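Once the code and dependencies are verified, the runtime switch itself is a single configuration call. The version mapping below is illustrative, not an official AWS table, and the boto3 call requires AWS credentials:

```python
UPGRADES = {  # illustrative mapping, not an official AWS table
    "python3.7": "python3.12",
    "python3.8": "python3.12",
    "nodejs14.x": "nodejs20.x",
    "nodejs16.x": "nodejs20.x",
}

def target_runtime(current: str) -> str:
    """Supported runtime to move to; unchanged if already supported."""
    return UPGRADES.get(current, current)

def migrate_function(function_name: str, current: str) -> None:
    """Point the function at the new runtime (updated code and dependencies
    must be deployed separately). Requires AWS credentials."""
    import boto3
    boto3.client("lambda").update_function_configuration(
        FunctionName=function_name,
        Runtime=target_runtime(current),
    )
```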

3. Test, Deploy, and Monitor

Testing is crucial to ensure that Lambda functions work correctly after migration. Biotech and pharmaceutical companies can use AWS CloudWatch for logging and monitoring to track the performance of the updated functions. Once testing is complete, the updated functions can be deployed into production.

4. Optimize and Scale

After migrating, organizations should monitor the performance of Lambda functions and look for opportunities to optimize. AWS CloudWatch metrics and AWS X-Ray can help track function execution times, resource usage, and error rates, ensuring the system runs smoothly as research needs scale.

Benefits of Well-Architected Framework Review


A Well-Architected Framework Review (WAFR) is a valuable process for identifying issues that may exist in an AWS Lambda environment. By conducting a review, organizations can assess their cloud infrastructure against AWS’s best practices across six key pillars: operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability. As part of this review, an evaluation of the Lambda functions is performed, ensuring Lambda functions are operating within a VPC, using encrypted environment variables, and following the principle of least privilege. This proactive assessment helps pinpoint areas that may be a security risk or cost liability. A Well-Architected Review also offers recommendations on how to remediate these issues, ensuring the organization’s environment is aligned with the latest AWS standards and best practices. For more information about conducting a Well-Architected Framework Review with PTP, including options to fully fund the project, fill out the form at the bottom of this page or contact info@ptp.cloud.

Conclusion

The end of support for older Python and Node.js runtimes in AWS Lambda presents a significant challenge for biotech and pharmaceutical research organizations relying on Lambda to power their critical applications. However, migrating to newer runtimes is essential for maintaining security, performance, and regulatory compliance. By updating to supported runtimes, research organizations can improve the efficiency and scalability of their workflows, ensure better data protection, and reduce operational complexities.

Taking proactive steps to migrate to the latest supported versions will ensure that Lambda functions remain secure, cost-effective, and capable of supporting the next generation of scientific breakthroughs.

Request your complimentary WAFR today!


The post The Impact of AWS Lambda’s End of Support for Older Python and Node.js Runtimes: Why Migrating is Critical for Your Cloud Strategy appeared first on PTP | Cloud Experts | Biotech Enablers.

Mastering AI Architecture for Life Sciences: Balancing Top-Down and Bottom-Up Strategies
https://ptp.cloud/mastering-ai-architecture-life-sciences/ | Published Wed, 24 Apr 2024

Discover the key aspects of mastering AI architecture in the life sciences industry with expert insights from Aaron Jeskey, Senior Cloud Architect at PTP. This blog post explores the innovative approaches and strategies used to leverage AI for scientific advancements.



Artificial Intelligence (AI) and Machine Learning (ML) are no longer future goals for life sciences—they’re current necessities. In this session, PTP joined John Conway, Chief Visioneer Officer of 20/15 Visioneers, to discuss the practical realities of implementing AI in research environments. Titled “The Rational AI Architect,” this webinar explores how biotech companies and research teams can lay the groundwork for successful AI projects.

The AI “Gold Rush” in Life Sciences

In 2023 and 2024, nearly every technology roadmap has been updated to include generative AI, LLMs, or predictive modeling. But for life sciences companies—especially those still modernizing infrastructure—AI can be more aspiration than reality. Cloud data migration and governance must come first, and the foundation must be secure, scalable, and compliant with life sciences standards like GxP and HIPAA.

PTP’s team has helped companies design for long-term success by aligning IT infrastructure for AI workloads, assessing readiness, and educating internal teams. This blog summarizes key takeaways from the conversation with John Conway and Aaron Jeskey, Senior Cloud Architect at PTP.


Where AI Initiatives Begin—and Why It Matters

AI and ML requests in life sciences organizations typically originate in one of two ways:

Bottom-Up: From Researchers to Leadership

Requests often come from bench scientists, data analysts, or bioinformaticians who see the potential of AI in their workflows. While technically promising, these initiatives struggle without organizational support. Challenges include:

  • Lack of platform consistency

  • Poorly defined processes for infrastructure management

  • Limited buy-in from senior leadership

  • Fragmented tools and shadow IT

To scale successfully, research IT teams need alignment from leadership on governance, security, and funding.

Top-Down: From Leadership to Technical Teams

When executives lead the charge for AI, expectations tend to be high—but often misaligned with technical feasibility. Challenges include:

  • A shortage of qualified cloud data engineers or DevOps resources

  • Unrealistic timelines

  • Missing foundational components (e.g., data orchestration, monitoring, tagging strategies)

  • Security and compliance blind spots

Without sufficient technical resources or internal education, the disconnect between strategy and execution grows quickly.


Managing Expectations and Building Real Platforms

Whether AI demand comes from the C-suite or the lab, the risks are similar: misalignment, wasted investment, and failed pilots. To mitigate this, PTP recommends:

  • Starting with a Well-Architected Framework Review to benchmark infrastructure readiness

  • Implementing cloud governance and cost control early

  • Leveraging scalable platforms like AWS for Life Sciences to support evolving workloads

  • Building internal fluency through education and transparent reporting

In one PTP case study, a genomics company attempted to implement ML without a centralized data repository. After a platform redesign and cloud engineering support, they reduced duplication, improved compliance, and accelerated model training by 30%.


Conclusion: Rational AI Starts with Realistic Infrastructure

Whether requests come top-down or bottom-up, life sciences companies need a unified strategy for implementing AI. That means prioritizing foundational architecture, building in compliance from day one, and maintaining strong communication between research and leadership.

PTP helps clients develop cloud-native AI platforms that are secure, scalable, and optimized for scientific research. From managed IT for labs to multi-region HPC clusters, we help life sciences companies move from AI talk to AI outcomes.

The post Mastering AI Architecture for Life Sciences: Balancing Top-Down and Bottom-Up Strategies appeared first on PTP | Cloud Experts | Biotech Enablers.

The Rational AI Architect – The Path for AI in Life Sciences
https://ptp.cloud/the-rational-ai-architect-the-path-for-ai-in-life-sciences/ | Published Wed, 20 Mar 2024


PTP joins John Conway of 20/15 Visioneers to discuss the path for AI in life sciences. 2023 and 2024 have brought AI into almost every technology conversation. In most instances, the AI discussion is about “how can we harness this value in the future?” For drug discovery, most data environments will have to walk before they run.

Download the latest white paper on Scientific Data Management: Best Practices to Achieve R&D Operational Excellence

 

In this webinar, PTP and 20/15 Visioneers discuss:

  • AI – the “it” word for 2024
  • What is “The Rational AI Architect”
  • Best Practices for Early Stage Life Sciences data environments
  • Case studies of PTP getting data “ready”
  • Roadmap for leveraging AI

Artificial Intelligence (AI) is transforming industries across the board, and life sciences is no exception. From drug discovery to gene therapy, AI is enabling significant efficiency gains. However, with AI’s potential comes a need for careful planning and execution. 


The Role of the Rational AI Architect

The concept of the rational AI architect revolves around building AI solutions with a focus on first principles. This approach aims to create robust AI solutions that are not over-engineered or overly complex. It emphasizes understanding the core problem, assessing existing resources, and developing a strategy to build efficient AI systems.

The rational AI architect avoids getting swept up in the hype surrounding AI, focusing on the foundational work required to ensure AI’s success in the life sciences field. This includes data acquisition, management, and security practices, along with considerations for the cultural shift required to adopt AI effectively.


Building a Culture of Data-Driven Decision-Making

A successful AI initiative starts with a culture that views data as an asset. Organizations need to cultivate a mindset where data is seen as a valuable resource that requires proper governance and management. Without this cultural foundation, AI projects are likely to encounter significant obstacles.

The panelists emphasized the importance of having a clear data strategy that encompasses data tagging, contextualization, and versioning. By establishing these practices, life sciences companies can build a foundation that supports AI and machine learning (ML) initiatives. As one panelist mentioned, “data is currency,” and managing it effectively is key to success.


Key Challenges and Solutions

The challenges associated with AI in life sciences can be broadly categorized into cultural, data-related, and security challenges. Here’s a breakdown of some key takeaways from the discussion:

  • Cultural Challenges: Creating a culture that values data and understands the importance of AI requires effort. Organizations should establish incentives to encourage proper data management and compliance.
  • Data Challenges: To achieve AI success, companies must ensure that their data is findable, accessible, interoperable, and reusable (FAIR). This includes developing a scientific data strategy, assessing the health of existing data, and maintaining proper versioning and tagging practices.
  • Security Challenges: Security is a top concern in AI adoption. It’s crucial to ensure that sensitive data is protected, and the right protocols are in place to maintain compliance with industry regulations. Implementing a comprehensive security strategy is a fundamental step.

Establishing an AI Center of Excellence

To ensure successful AI adoption, the panelists recommended establishing an AI Center of Excellence. This approach brings together a team of experts to oversee AI initiatives, ensuring that they align with organizational goals and best practices. The AI Center of Excellence can help:

  • Define the scope of AI projects and set clear objectives.
  • Provide guidance on foundational model selection.
  • Ensure compliance with data governance and security protocols.
  • Facilitate cross-functional collaboration and knowledge sharing.

The Path Forward

AI has the potential to revolutionize life sciences, but it requires a thoughtful approach. Organizations should start small, focus on building a solid data foundation, and establish the right culture to support AI adoption. The rational AI architect approach encourages a stepwise, iterative process that prioritizes data quality, security, and stakeholder collaboration.

 


With AI and ML evolving rapidly, companies need to adapt to changing technologies while staying grounded in proven methodologies. By embracing the rational AI architect approach, life sciences organizations can navigate the complexities of AI and unlock its full potential.

Speakers

The post The Rational AI Architect – The Path for AI in Life Sciences appeared first on PTP | Cloud Experts | Biotech Enablers.

Case Study – Terraform on AWS for Automating Workflows
https://ptp.cloud/case-study-terraform-on-aws-for-automating-workflows/ | Published Fri, 27 Oct 2023

In the high-risk domain of biotechnology, PTP partnered with a life sciences client to automate critical data pipelines using AWS, Terraform, and Nextflow, ensuring research validity, cost optimization, and operational efficiency. This collaboration led to the implementation of automated, repeatable, and secure workflows, allowing the client to focus on advancing scientific discovery and improving patient outcomes.



In the fascinating, high-risk/high-reward domain of biotechnology, research validation and landing the next wave of funding stand as crucial checkpoints in the first years of a startup. The 2023 life sciences arena has witnessed escalating scrutiny, disturbing reports of manipulated outcomes, and pressure to deliver results faster than ever before. Because of these pressures, building efficient, transparent, and reliable data pipelines, in this case with Terraform, has become a top priority for leadership teams across the industry.

In an effort to address the data-authenticity priority, PTP’s life sciences client selected Amazon Web Services (AWS) as their strategic cloud platform. From the beginning, the client’s primary goal in their engagement with PTP’s CloudOps and DevOps teams has been to fortify the validity of their research by streamlining and automating a mission-critical data pipeline. AWS, the leading cloud infrastructure platform for life sciences; Terraform; and Nextflow, a recognized leader in scientific workflow systems, were chosen as cornerstone technologies to increase the likelihood of success.

After completing an AWS Well-Architected Review, PTP had a solid working knowledge of how a couple of the client’s data pipelines had to work. PTP’s lead Solution Architect began using Nextflow to programmatically author a sequence of dependent compute steps, tying multiple software applications (Cell Ranger, Seurat, Picard, and STAR Aligner) to auto-configured AWS resources including EC2, ELB, Auto Scaling, Lambda, and Fargate. The integration of software tools and AWS infrastructure was completed, tested, and proven to work.

From there, EC2 Image Builder and AWS Service Catalog were set up to produce compute images in a controlled and repeatable manner. This allowed the client’s research teams to independently launch pipelines on AWS compute infrastructure via a pre-built Service Catalog. With this solution in place, users can do their research securely online via a few easy-to-follow clicks. Governance of the work that these users perform is managed via AWS security policies, with each user given the ability to launch relevant predetermined pipelines through the Service Catalog. The research process was, in a matter of weeks, automated, repeatable, adjustable, and fully documented. This joint PTP and AWS solution ensures research validation while accelerating science. The image below demonstrates the overall environment.

[Diagram: Terraform and Service Catalog implementation for automating data workflows in AWS, showing multiple stages and integrations.]

All PTP builds have been put into Terraform templates to maintain known image files and component lists. Version control is handled by AWS CodeCommit. As components change in Terraform, for example a software update to “version 4.2”, Terraform will detect that the file has changed and will deploy a new version of the component, which then creates a new version of the recipe in Image Builder. For AWS cost optimization, which is extremely important when building infrastructure in this manner, the Service Catalog services are tied to CloudWatch events. When devices go idle, a trigger fires, and an SQS queue and Lambda are used to terminate resources. This automates cost control. This systematic approach was arrived at by thoughtful engineering and has created an operationally efficient environment that can easily scale. The image below represents the Terraform and Service Catalog environment.

[Diagram: The Terraform and Service Catalog environment for automating workflows in AWS, used to streamline the client’s data pipelines.]
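One plausible shape for that CloudWatch → SQS → Lambda teardown, assuming the idle-detection events land on an SQS queue as small JSON messages. The message format and field names are our illustration, not the client's actual schema, and the EC2 call requires AWS credentials:

```python
import json

def idle_instance_ids(sqs_event: dict) -> list:
    """Pull instance ids out of the SQS records queued by the
    idle-detection CloudWatch events (message shape is illustrative)."""
    return [json.loads(r["body"])["instance_id"]
            for r in sqs_event.get("Records", [])]

def handler(event, context=None):
    ids = idle_instance_ids(event)
    if ids:
        import boto3  # requires AWS credentials; not exercised in this sketch
        boto3.client("ec2").terminate_instances(InstanceIds=ids)
    return {"terminated": ids}
```

Because termination is driven by events rather than polling, idle compute stops accruing cost within minutes of the last activity.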

Recently, PTP has been working with the client to incorporate Amazon WorkSpaces for SAS access and AWS Managed AD. In the months ahead, PTP’s experience and consultative approach will be invaluable when determining the best options to isolate data and create additional levels of control and security.

PTP and the client’s IT and Informatics teams have implemented solutions that, at first, seemed complex and foreign. Embracing change is never easy, but by doing so the client has taken huge steps to avoid common data related pitfalls that face all Life Sciences companies, and thus has improved the likelihood of finding life changing relief for patients suffering from debilitating diseases.

 

Learn More about PTP’s CloudOps Service for Accelerating Science on AWS

 

Know what you need?  Purchase CloudOps on the AWS Marketplace HERE!


The post Case Study – Terraform on AWS for Automating Workflows appeared first on PTP | Cloud Experts | Biotech Enablers.

Bioinformatics Pipeline Automation and Optimization via AWS and PTP
https://ptp.cloud/bioinformatics-pipelines/ | Published Tue, 19 Sep 2023

In this presentation, Scott Scheirey, Scientific Partner Advisor at PTP, addresses common challenges faced by computational biologists in optimizing bioinformatics workflows. He highlights the use of AWS Batch, Nextflow, and Airflow to enhance pipeline efficiency, reliability, and speed. Scheirey explains how these tools can help process large volumes of genomic data more quickly and cost-effectively, ultimately supporting research validation and clinical trials.



Are your bioinformatics pipelines slow, crashing, or hard to scale? In this video, Scott Scheirey from PTP breaks down how to streamline and optimize bioinformatics workflows using AWS features like Batch, S3, and SageMaker.

Watch the full video on YouTube.

Problem: Is Your Pipeline Inefficient, Slow, or Keeps Crashing?

As a computational biologist, you’re likely working with sequencing platforms like Illumina, PacBio, 10x Genomics, or Vizgen—and your pipelines process massive volumes of data from FastQ, H5AD, or VCF files. But as research scales and instruments evolve, those pipelines can become bottlenecks.

You might have a pipeline that works... most of the time. But it’s slow, or unreliable, or hard to automate. As you approach critical milestones—like funding rounds or clinical trial validation—these inefficiencies cost time and opportunity. Scaling and parallelizing pipelines within AWS can eliminate these challenges.

AWS Features That Accelerate Your Workflows

Nextflow and Airflow are powerful tools for managing workflows, especially when combined with AWS Batch, which automates parallel job processing. These jobs can be triggered automatically when new data is generated, using scalable infrastructure configured with optimized compute instances.
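
As a rough illustration of that trigger pattern, the sketch below turns an S3 "object created" event into AWS Batch job submissions. The queue, job definition, and bucket names are placeholders (assumptions, not PTP's actual setup); in a real deployment each payload would be passed to `boto3.client("batch").submit_job(**payload)` from a Lambda function wired to the S3 bucket notification.

```python
import json

# Hypothetical names -- substitute your own AWS Batch queue and job definition.
JOB_QUEUE = "genomics-pipeline-queue"
JOB_DEFINITION = "nextflow-runner:1"

def build_batch_submissions(s3_event: dict) -> list:
    """Turn an S3 'ObjectCreated' event into AWS Batch submit_job payloads."""
    payloads = []
    for record in s3_event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        payloads.append({
            # Job names may only contain letters, numbers, hyphens, underscores.
            "jobName": "pipeline-" + key.replace("/", "-").replace(".", "-"),
            "jobQueue": JOB_QUEUE,
            "jobDefinition": JOB_DEFINITION,
            # The pipeline container reads its input location from this parameter.
            "parameters": {"input_uri": f"s3://{bucket}/{key}"},
        })
    return payloads

# Example S3 event, abbreviated to the fields used above.
event = {"Records": [{"s3": {"bucket": {"name": "lab-raw-data"},
                             "object": {"key": "run42/sample1.fastq.gz"}}}]}
print(json.dumps(build_batch_submissions(event), indent=2))
```

Because the trigger fires per uploaded object, each new sequencing run kicks off its own parallel Batch job with no manual step in between.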

Once processed, data is stored in Amazon S3 in a usable format—whether that’s for visualization or structured formats like JSON used to train machine learning models in SageMaker.

These improvements aren’t just about performance. In many cases, pipeline processing time has been reduced by over 70%, while also decreasing cloud spend—thanks to more efficient automation and job orchestration.

AWS Marketplace logo for CloudOps for Life Sciences Startups

If you’re interested, check out PTP CloudOps for Life Sciences Startups on AWS Marketplace

Need help scaling your genomics pipelines? Learn how our scientific computing IT support helps research teams automate, scale, and accelerate breakthroughs in life sciences.

The post Bioinformatics Pipeline Automation and Optimization via AWS and PTP appeared first on PTP | Cloud Experts | Biotech Enablers.

]]>
7128
Optimizing Scientific Workloads with Benchling https://ptp.cloud/optimizing-scientific-workloads-with-benchling/?utm_source=rss&utm_medium=rss&utm_campaign=optimizing-scientific-workloads-with-benchling Tue, 01 Aug 2023 15:21:25 +0000 https://ptp.cloud/?p=6901 Discover how Benchling and PTP CloudOps can overcome data management challenges in life sciences by standardizing and optimizing scientific workflows.

The post Optimizing Scientific Workloads with Benchling appeared first on PTP | Cloud Experts | Biotech Enablers.

]]>

Addressing Data Management Challenges in Life Sciences with Benchling and PTP CloudOps

Benchling is popular with scientists for its ability to leverage protocols to deliver a standard data format from lab devices and address FAIR data at the source. However, its proprietary nature can create challenges for informatics teams. PTP and our CloudOps team are here to help!

The Challenge of Proprietary Lab Software

In the world of life sciences, data management can be a significant hurdle. Scientists often rely on proprietary software from various lab systems, which can lead to fragmented data management and compatibility issues. These challenges hinder the efficient extraction and processing of data from lab equipment, slowing down research progress.

Benchling: A Unified Solution for Lab Data

Benchling, a popular Electronic Lab Notebook (ELN) solution, addresses these challenges by providing a unified format for lab data. By establishing protocols to extract data from lab equipment, Benchling ensures that data conforms to FAIR (Findable, Accessible, Interoperable, Reusable) standards from the source. This standardized approach streamlines data management, making it easier for scientists to focus on their research.

Challenges with Data Extraction and Informatics

Despite the benefits Benchling offers, some informatics teams face challenges when extracting data for use in other applications or further analysis. These issues can include:

  • Depth of Processing: The level of data processing and rendering may not meet the specific needs of the informatics team, leading to incomplete or suboptimal results.
  • Data Extraction: Obtaining data from Benchling for use in other software or platforms can be complex, especially for custom analyses or integrations.
  • Team Satisfaction: While scientists might find Benchling convenient, the informatics team may struggle with its limitations, affecting overall satisfaction with the data management process.

PTP CloudOps: Your Solution for Scientific Workloads

PTP, an AWS Life Sciences Competency Partner, offers a solution to these challenges through its CloudOps service. The PTP CloudOps team excels in automating and optimizing scientific workloads, providing the tools and expertise needed to streamline data management.

With CloudOps, you can expect:

  • Rapid Responses from Biotech-Focused Cloud Engineers: PTP’s team understands the unique needs of life sciences and offers quick support to address any issues.
  • Automation and Optimization: PTP specializes in automating scientific workflows and optimizing data processing, ensuring that you get the most out of your lab data.
  • Easy Onboarding and Purchase: CloudOps is available on the AWS Marketplace, making it easy to purchase and onboard.

Accelerate Your Science with PTP CloudOps

If you’re facing challenges with data extraction, processing, or overall data management, PTP CloudOps can help. By streamlining your scientific workloads and providing expert support, CloudOps allows your informatics team to work as effectively as your scientists.

Conclusion

Data management is a critical aspect of life sciences research, and proprietary software can complicate the process. Benchling offers a unified solution, but it may not address all the challenges faced by informatics teams. PTP CloudOps provides a comprehensive solution to these issues, enabling life sciences organizations to optimize their scientific workloads and accelerate their research. With CloudOps, you can improve data management, enhance team satisfaction, and focus on what matters most: advancing scientific discovery.

 

Logo of AWS Marketplace featuring stylized text and an orange Amazon smile.

 

CloudOps Available on the AWS Marketplace!

The post Optimizing Scientific Workloads with Benchling appeared first on PTP | Cloud Experts | Biotech Enablers.

]]>
6901
When to Use AirFlow vs NextFlow for Pipelines https://ptp.cloud/when-to-use-airflow-vs-nextflow-for-pipelines/?utm_source=rss&utm_medium=rss&utm_campaign=when-to-use-airflow-vs-nextflow-for-pipelines Tue, 01 Aug 2023 14:06:48 +0000 https://ptp.cloud/?p=6887 Explore the advantages and practical scenarios for using Apache AirFlow and NextFlow for data pipelines. This article delves into how AirFlow excels in managing complex workflows, orchestrating ETL jobs, and supporting machine learning data preparation on AWS, while also highlighting the capabilities of NextFlow for scientific workflows.

The post When to Use AirFlow vs NextFlow for Pipelines appeared first on PTP | Cloud Experts | Biotech Enablers.

]]>
Airbnb created Apache Airflow in 2014 as an open-source platform to programmatically author, schedule, and monitor data pipelines, originally to help manage the company’s increasingly complex workflows; it has remained open source from the start. Airflow provides features that help define, create, schedule, monitor, and execute data workflows (Grzemski, 2020). In Airflow, a workflow is a sequence of tasks that process data, helping the user build pipelines. Scheduling is the process of planning, controlling, and optimizing when tasks run, while authoring workflows in Airflow means writing Python scripts that generate directed acyclic graphs (DAGs). A DAG comprises the tasks a user wants to run, organized in a way that reflects their relationships and dependencies; a task is thus a unit of work within the DAG (Grzemski, 2020). Airflow is used on AWS in several scenarios, as described below.
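
To make the DAG idea concrete, the sketch below models a miniature pipeline as task dependencies and resolves them into an execution order, which is the core of what Airflow’s scheduler does. This is a standard-library stand-in, not the Airflow API itself; a real DAG file would use `airflow.DAG` and operator classes.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# A miniature DAG in the spirit of an Airflow pipeline:
# each key is a task, each value is the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

# Airflow's scheduler does far more (retries, scheduling intervals, pools),
# but at its core it runs tasks in an order consistent with the dependencies:
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'report']
```

Because the graph is acyclic, every task is guaranteed to run only after all of its upstream dependencies have completed.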

One scenario that calls for Apache Airflow is managing workflows. Alvarez-Parmar and Maisagoni (2021) note that, as an open-source distributed workflow management platform, Airflow allows users to schedule, orchestrate, and monitor workflows, and to orchestrate and automate complex data pipelines. Airflow can run on AWS Fargate alongside Amazon Elastic Container Service (ECS) as an orchestrator, so the user does not have to provision or manage servers. Airflow is more than a batch processing platform: it lets users build pipelines that process data and run complex jobs in a distributed manner. With AWS Fargate, a user can run Airflow’s core components without creating and managing servers, without guessing server capacity for the Airflow cluster, and without worrying about autoscaling groups and bin packing to maximize resource utilization. Alvarez-Parmar and Maisagoni (2021) note that “Managed Workflows is a managed orchestration service for Apache Airflow that makes it easy for data engineers and data scientists to execute data processing workflows on AWS.” Airflow helps users orchestrate workflows and manage how they execute without configuring, managing, and scaling the Airflow architecture themselves. Users who run Airflow on AWS should consider Amazon Managed Workflows for Apache Airflow (MWAA), which sets up Airflow, provisions and autoscales capacity (storage and compute), keeps Airflow up to date, and automates snapshots. For managing workflows, then, Airflow provides a reliable and scalable framework for orchestrating data workflows, enabling data engineers to extract, transform, and load data from different sources. With Airflow’s operators, integration with data systems like cloud storage, databases, and data warehouses is made easier.

Oliveira and Raditchkov (2019) make a related point about using Airflow to create on-demand or scheduled workflows that process complicated data from different providers: Apache Airflow makes it easier to orchestrate big data workflows. Large companies that run big data ETL workflows on AWS operate at a scale where many internal end-users and many concurrent pipelines must be serviced. With the continuous pressure to extend and update the big data platform to keep up with the latest processing frameworks, what is needed is an efficient architecture that simplifies platform management and eases access to big data applications. Using Airflow on AWS, centralized platform teams can maintain their big data platform, service many concurrent ETL workflows, and simplify the operational tasks required. In this architecture, Airflow is paired with Amazon EMR and the open-source Genie: Amazon EMR provides the big data platform where workflow authoring, orchestration, and execution take place, while Genie offers a centralized REST API for big data job submission, central configuration management, dynamic job routing, and abstraction of Amazon EMR clusters. Throughout, Airflow provides the orchestration layer, allowing the user to programmatically author, schedule, and monitor complex data pipelines (Oliveira and Raditchkov, 2019). Amazon EMR’s role is to offer a managed cluster platform that runs and scales Apache Spark, Hadoop, and other big data frameworks. The diagram below shows Airflow on AWS orchestrating big data workflows:

Diagram illustrating the integration of Apache Airflow and AWS for managing big data workflows. The process involves platform admin engineers registering big data applications like Apache Spark, provisioning Amazon EMR clusters, and using Apache Airflow for workflow authoring, orchestration, and scheduling. Jobs are submitted to Genie via a custom operator and executed on the EMR clusters. Data is stored in Amazon S3, with dynamic routing and cluster management provided by Genie.

Airflow in AWS on big data Workflows Management (Oliveira and Raditchkov, 2019)

Beyond supporting complex workflows, another practical scenario for Airflow on AWS is coordinating extract, transform, and load (ETL) jobs. Airflow suits ETL work because it is built around operators, which represent the logical blocks of an ETL workflow (Sinha, 2021). To run an Airflow ETL job, a user needs an AWS account and Airflow installed. An ETL job transforms raw data into useful datasets and, ultimately, actionable insight. Anany (2018) explains that an ETL job reads data from various sources and applies transformations before writing the results to a target, where the data becomes ready for consumption. The ETL sources and targets are relational databases and Amazon S3, which help build a data lake on AWS. AWS provides AWS Glue as a service for authoring and deploying ETL jobs; AWS Glue “is a fully managed extract, transform, and load service that makes it easy for customers to prepare and load their data for analytics.”

Beyond AWS Glue, other AWS services that implement and manage ETL jobs include AWS Data Migration Service (DMS), Amazon Athena, and Amazon EMR. In a practical scenario where one needs to orchestrate ETL jobs and workflows spanning different ETL technologies, Airflow with AWS Glue and AWS DMS lets the user easily chain ETL jobs. A good example is a business user who wants to answer questions spanning different datasets. If the user wants to find the correlation between forecasted sales revenue and online user engagement metrics like mobile users, website visits, or desktop users, the ETL workflow has three steps: processing the sales dataset (PSD), processing the marketing dataset (PMD), and joining the marketing and sales datasets (JMSD) (Anany, 2018). The user then implements the ETL workflow in AWS Glue by chaining ETL jobs with job triggers. The diagram shown below demonstrates how the user manages this ETL workflow with AWS Glue:

Diagram depicting the management of an ETL workflow using AWS Glue, showcasing the process of chaining ETL jobs for data integration and transformation.

Using AWS Glue to solve the case scenario (Anany, 2018)
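
The three steps in this scenario — PSD, PMD, and JMSD — can be sketched as plain functions to show the data flow. The datasets and field names below are invented for illustration; in the real workflow each step would be an AWS Glue job chained by triggers rather than an in-process function call.

```python
# Hypothetical mini-datasets standing in for the sales and marketing sources.
sales = [
    {"month": "2023-01", "forecast_revenue": 120_000},
    {"month": "2023-02", "forecast_revenue": 135_000},
]
marketing = [
    {"month": "2023-01", "website_visits": 5_400, "mobile_users": 2_100},
    {"month": "2023-02", "website_visits": 6_900, "mobile_users": 2_750},
]

def process_sales(rows):       # PSD: index sales forecasts by month
    return {r["month"]: r["forecast_revenue"] for r in rows}

def process_marketing(rows):   # PMD: index engagement metrics by month
    return {r["month"]: {k: v for k, v in r.items() if k != "month"}
            for r in rows}

def join_datasets(psd, pmd):   # JMSD: join the two on the shared month key
    return [{"month": m, "forecast_revenue": psd[m], **pmd[m]}
            for m in sorted(psd.keys() & pmd.keys())]

joined = join_datasets(process_sales(sales), process_marketing(marketing))
print(joined[0])
```

The joined rows pair each month’s forecasted revenue with its engagement metrics, which is exactly the shape needed to compute the correlation the business user asked for.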

The following architecture demonstrates how running Airflow on AWS coordinates extract, transform, and load (ETL) jobs.

Diagram illustrating the coordination of ETL (extract, transform, and load) jobs using Apache Airflow on AWS, detailing the integration with various AWS services for data processing.

Running Airflow on AWS to coordinate ETL jobs (Anany, 2018)

Finally, another practical scenario for running Airflow on AWS is preparing and managing machine learning data and workflows. Cantu (2023) argues that Airflow efficiently manages machine learning workflows because it lets machine learning engineers and data scientists define and schedule tasks for data pre-processing, model training, evaluation, and deployment. Airflow’s ability to schedule tasks and handle dependencies in a distributed framework thus supports management of the end-to-end lifecycle of machine learning models. Machine learning workflows automate and orchestrate sequences of ML tasks, from data collection and transformation through training, evaluating, and testing the model to reach the intended outcome. Many customers use Airflow for scheduling, authoring, and monitoring multi-stage workflows, and AWS can automate Amazon SageMaker tasks in an end-to-end workflow: publishing datasets to Amazon S3, training an ML model on that data, and deploying the model for prediction. By running Airflow on AWS, one can “prepare data in AWS Glue before they train a model on Amazon SageMaker and then deploy the model to the production environment to make inference calls” (Thallam and Dominguez, 2019). Automating and orchestrating tasks across several services makes it easier to build repeatable, reproducible machine learning workflows that data scientists and engineers can share. When Airflow runs on AWS, AWS Step Functions can monitor the Amazon SageMaker jobs to ensure they succeed, using features such as built-in error handling, state management, a visual console, and parameter passing.
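
As a rough sketch of what such an orchestrated prepare–train–deploy sequence looks like on the AWS side, a Step Functions state machine chaining Glue and SageMaker can be expressed in the Amazon States Language. The job, endpoint, and parameter names below are placeholders; a production definition would carry the full Glue and SageMaker parameter sets.

```python
import json

# A minimal Step Functions definition: prepare data in Glue, train a
# SageMaker model, then deploy it. Resource ARNs are the standard
# Step Functions service integrations; names are illustrative only.
definition = {
    "Comment": "Prepare data, train, then deploy (sketch, not production)",
    "StartAt": "PrepareData",
    "States": {
        "PrepareData": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "prepare-training-data"},
            "Next": "TrainModel",
        },
        "TrainModel": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync",
            "Parameters": {"TrainingJobName": "demo-training-job"},
            "Next": "DeployModel",
        },
        "DeployModel": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sagemaker:createEndpoint",
            "Parameters": {"EndpointName": "demo-endpoint"},
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2))
```

The `.sync` suffix on the first two states makes Step Functions wait for the Glue and SageMaker jobs to finish (and surface failures) before moving on, which is the built-in monitoring the paragraph above describes.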

References

Alvarez-Parmar, R. and Maisagoni, C. (2021). Running Airflow on AWS Fargate. Amazon AWS. https://aws.amazon.com/blogs/containers/running-airflow-on-aws-fargate/

Anany, M. (2018). Orchestrate multiple ETL jobs using AWS Step Functions and AWS Lambda. Amazon AWS. https://aws.amazon.com/blogs/big-data/orchestrate-multiple-etl-jobs-using-aws-step-functions-and-aws-lambda/

Cantu, J. (2023). Mastering Workflow Management and Orchestration with Apache Airflow. Medium. https://medium.com/@jesus.cantu217/apache-airflow-a-comprehensive-guide-to-workflow-management-and-orchestration-bf1372e11920#:~:text=It%20provides%20a%20scalable%20and,cloud%20storage%2C%20and%20data%20warehouses.

Grzemski, S. (2020). Highly available Airflow cluster in Amazon AWS. Getindata. https://getindata.com/blog/highly-available-airflow-amazon-aws/

Oliveira, F. and Raditchkov, J. (2019). Orchestrate big data workflows with Apache Airflow, Genie, and Amazon EMR: Part 1. Amazon AWS. https://aws.amazon.com/blogs/big-data/orchestrate-big-data-workflows-with-apache-airflow-genie-and-amazon-emr-part-1/

Sinha, V. (2021). Understanding Airflow ETL: 2 Easy Methods. Hevo. https://hevodata.com/learn/airflow-etl-guide/#etljob

Thallam, R. and Dominguez, M. (2019). Build end-to-end machine learning workflows with Amazon SageMaker and Apache Airflow. Amazon AWS. https://aws.amazon.com/blogs/machine-learning/build-end-to-end-machine-learning-workflows-with-amazon-sagemaker-and-apache-airflow/

The post When to Use AirFlow vs NextFlow for Pipelines appeared first on PTP | Cloud Experts | Biotech Enablers.

]]>
6887
HPC in Life Sciences: How PTP Optimizes AWS for Biotech Research https://ptp.cloud/hpc-in-life-sciences-with-aws-and-ptp/?utm_source=rss&utm_medium=rss&utm_campaign=hpc-in-life-sciences-with-aws-and-ptp Wed, 09 Nov 2022 20:47:24 +0000 https://ptp.cloud/?p=6474 PTP Cloud Architect Aaron Jeskey shares how life sciences teams can optimize high-performance computing (HPC) environments on AWS for security, speed, and compliance.

The post HPC in Life Sciences: How PTP Optimizes AWS for Biotech Research appeared first on PTP | Cloud Experts | Biotech Enablers.

]]>

High-performance computing (HPC) is no longer a luxury—it’s a necessity in life sciences. From genomics to simulations, today’s research environments demand cloud-native performance, security, and flexibility. In this episode of the CloudOps Podcast, AWS Life Sciences Partner PTP shares how their team builds, secures, and scales HPC environments with AWS-native tools, research-aligned architecture, and hands-on support.

PTP Senior Cloud Architect Aaron Jeskey joins Jon Myer to explain how biotech and clinical research teams are optimizing AWS infrastructure for performance, security, and compliance. Drawing from years of hands-on experience in scientific computing and managed IT services for life sciences, Aaron walks through real examples where PTP helped customers transform underperforming HPC environments into scalable, compliant platforms for innovation.

🧠 From Cluster Chaos to Cost Control

Many early-stage biotech organizations inherit legacy workflows and ad hoc cloud setups. PTP starts by performing a Well-Architected Review to identify risks, cost inefficiencies, and performance bottlenecks. This step is crucial in optimizing research IT infrastructure without disrupting scientific operations.

🔒 Built-In Security for Life Sciences Compliance

Data protection is top of mind for life sciences companies dealing with HIPAA, GxP, and GDPR. PTP helps clients implement Transit Gateway, VPN, IAM policies, and encrypted storage for full-stack security—backed by tools like its Security Risk Assessment for regulated environments.

🚀 CloudOps Designed for Scientists

Whether supporting genomics pipelines or simulation workloads, PTP focuses on aligning infrastructure with research goals. They offer integration with lab tools like ELNs and LIMS, and provide cloud migration support that reduces friction and accelerates time to science.

🎯 Enabling Autonomy Through Training

Unlike many providers, PTP empowers customers through training and direct access—not vendor lock-in. Their cloud engineering team supports organizations with hands-on education and training videos that build internal expertise.

🎙️ Join the Conversation

PTP regularly shares insights at events like AWS re:Invent and Bio-IT World. This conversation offers a valuable glimpse into how top AWS partners solve critical challenges for life sciences.

🔎 Transcript Highlights: HPC in Life Sciences with AWS and PTP

00:10 – PTP’s straightforward and planning-driven approach to HPC cloud architecture

00:50 – Right-sizing AWS clusters to improve life sciences IT performance

01:12 – Leveraging AWS-native HPC tools for life sciences research pipelines

02:05 – Introducing Aaron Jeskey and his HPC and life sciences expertise

04:36 – Clarifying high-performance computing (HPC) vs traditional clustering for bioinformatics workflows

06:01 – Challenges in HPC infrastructure for early-stage biotech companies

07:00 – Conducting AWS Well-Architected Reviews for research IT optimization

09:02 – Cost control by identifying waste in long-running AWS compute clusters

11:03 – Empowering biotech teams through training instead of vendor lock-in

13:30 – Real-world client story: migrating scientific software from laptop to AWS WorkSpaces

15:27 – Optimizing AWS GPU usage for HPC in life sciences via spot instances and AZ scaling

17:00 – Security improvements: VPN, Transit Gateway, IAM policy enforcement in regulated environments

20:00 – Building long-term research partnerships through cloud enablement and strategy

24:00 – Why hands-on cloud architecture expertise matters for biotech HPC success

33:00 – Trends in instrument data migration, IoT pipelines, and AWS-native scientific analysis

Ready to optimize your life sciences HPC pipeline?
Get a free cloud assessment to uncover inefficiencies, reduce spend, and align your cloud architecture with research goals.

The post HPC in Life Sciences: How PTP Optimizes AWS for Biotech Research appeared first on PTP | Cloud Experts | Biotech Enablers.

]]>
6474
Leveraging AWS and DevOps to Improve Security Tools and Customer Outcomes https://ptp.cloud/leveraging-aws-and-devops-to-improve-security-tools-and-customer-outcomes/?utm_source=rss&utm_medium=rss&utm_campaign=leveraging-aws-and-devops-to-improve-security-tools-and-customer-outcomes https://ptp.cloud/leveraging-aws-and-devops-to-improve-security-tools-and-customer-outcomes/#respond Tue, 16 Jul 2019 16:31:21 +0000 https://ptp.cloud/ptp/?p=2410 The post Leveraging AWS and DevOps to Improve Security Tools and Customer Outcomes appeared first on PTP | Cloud Experts | Biotech Enablers.

]]>

One of the numerous advantages we have within the PTP PeakPlus™ team is our ability to customize, automate, and expand security services that, in some cases, even the security vendor is unable to provide.

Our customer needed a replacement for Cisco Cloud Web Security (CWS), which they used for web proxying, URL filtering (keeping users from going to the wrong sites), and user internet usage reports (finding people who go to the wrong sites). The obvious and recommended choice from Cisco is the awesome Umbrella solution. It provides everything the customer needed, as well as enhanced DNS and web security across their organization. The only issue: for compliance, the customer needed 90 days of log retention versus the standard 30-day limit the Umbrella product provides. Cisco does offer the ability to store the logs off the system, but this would mean the customer sifting through raw logs and then putting that data into some sort of readable format. This all equals a NO GO from the customer’s point of view.

So, the customer asked, “Can we fix it?” PTP answered, “Yes, we can!” (Is this a Bob the Builder reference?)

To meet the customer’s expectations, PTP provided a single, easy-to-use platform for user internet reporting, with search functions over adjustable date ranges and easily digestible reports clearly displaying user names, websites visited, and time/date.

Now if you want to get a little more into the weeds…

To get the user internet usage logs out of Umbrella we leveraged our strategic partner, AWS, and their S3 service. A new storage bucket and access policy were created in AWS to allow Umbrella to store compressed proxy and DNS logs in our corporate S3 buckets. Data lifecycle policies (within AWS S3) could then be deployed to maintain data for the required 90 days. In this format the data is still not searchable or useful to the customer.
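
As a sketch, the 90-day retention rule described above could be expressed in the shape boto3’s `put_bucket_lifecycle_configuration` expects. The rule ID and prefix here are illustrative, not the customer’s actual configuration.

```python
# Lifecycle rule expiring objects under the log prefix after 90 days.
# In practice this dict would be passed as:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="corporate-log-bucket",           # placeholder name
#       LifecycleConfiguration=lifecycle_configuration)
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "expire-umbrella-logs-after-90-days",
            "Filter": {"Prefix": "umbrella-logs/"},  # assumed log prefix
            "Status": "Enabled",
            "Expiration": {"Days": 90},
        }
    ]
}
print(lifecycle_configuration["Rules"][0]["Expiration"])
```

With the rule enabled, S3 deletes each log object 90 days after it lands, so retention compliance needs no scheduled cleanup job.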

Next, the user internet usage log data is moved out of Amazon S3 and into Amazon RDS to provide the search functions needed for displaying the data in a readable format. The Security Services team created Lambda functions in AWS to move the data into RDS. These custom functions were set up with triggers so that they run every time a new file is added to the S3 bucket.

Custom scripts then decompress the data, normalize it, de-duplicate it, and index it into an AWS RDS instance running MariaDB. Millions of rows of data are currently being processed by these Lambda functions every day for this customer.
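
The decompress/normalize/de-duplicate step can be sketched in miniature with the standard library. The column layout and field names below are simplified assumptions: real Umbrella logs carry many more fields, and the resulting rows would be inserted into the MariaDB instance rather than returned.

```python
import csv
import gzip
import io

def normalize_and_dedupe(gz_bytes: bytes) -> list:
    """Decompress gzipped CSV proxy logs, normalize fields, drop duplicates."""
    text = gzip.decompress(gz_bytes).decode("utf-8")
    seen, rows = set(), []
    for user, url, timestamp in csv.reader(io.StringIO(text)):
        # Normalize: trim whitespace, lowercase the user name.
        record = (user.strip().lower(), url.strip(), timestamp.strip())
        if record in seen:  # de-duplicate identical entries
            continue
        seen.add(record)
        rows.append({"user": record[0], "url": record[1],
                     "timestamp": record[2]})
    return rows

raw = ("alice,https://example.com,2019-07-01T12:00:00\n"
       "alice,https://example.com,2019-07-01T12:00:00\n"
       "Bob,https://example.org,2019-07-01T12:05:00\n")
logs = normalize_and_dedupe(gzip.compress(raw.encode("utf-8")))
print(logs)  # two unique rows: the duplicate alice entry is dropped
```

Indexing the normalized `(user, url, timestamp)` rows in RDS is what makes the later search-by-date-range reporting fast.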

The final piece provides a place to search, view and print reports of the data from our PeakPlus View customer portal. This was achieved through new reports and integrations created to fetch and display internet usage data.

The customer is now able to use the native functions of Cisco Umbrella while enjoying enhanced functionality, thanks to the PTP Security Services team’s ingenuity and tremendous work by PTP’s engineering team.

Secure cloud automation helps meet compliance where native tools fall short.

PTP extends AWS and Cisco capabilities to meet your exact needs with DevOps-driven security solutions.

The post Leveraging AWS and DevOps to Improve Security Tools and Customer Outcomes appeared first on PTP | Cloud Experts | Biotech Enablers.

]]>
https://ptp.cloud/leveraging-aws-and-devops-to-improve-security-tools-and-customer-outcomes/feed/ 0 2410