<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.3">Jekyll</generator><link href="https://everythingshouldbevirtual.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://everythingshouldbevirtual.com/" rel="alternate" type="text/html" /><updated>2026-04-09T23:25:05-04:00</updated><id>https://everythingshouldbevirtual.com/feed.xml</id><title type="html">EverythingShouldBeVirtual part of Methodical Cloud</title><subtitle>Exploring and sharing insights on virtualization, cloud technologies, automation, and DevOps. Methodical Cloud is focused on providing content-driven resources such as podcasts, blogs, and educational materials.</subtitle><author><name>Larry Smith Jr.</name></author><entry><title type="html">We’ve Moved: Welcome to Methodical Cloud</title><link href="https://everythingshouldbevirtual.com/update/welcome-to-methodical-cloud/" rel="alternate" type="text/html" title="We’ve Moved: Welcome to Methodical Cloud" /><published>2025-04-28T00:00:00-04:00</published><updated>2025-04-28T00:00:00-04:00</updated><id>https://everythingshouldbevirtual.com/update/welcome-to-methodical-cloud</id><content type="html" xml:base="https://everythingshouldbevirtual.com/update/welcome-to-methodical-cloud/">&lt;p&gt;After many years of ideas, exploration, and community through &lt;strong&gt;Everything Should Be Virtual&lt;/strong&gt;, it’s time for the next chapter.&lt;/p&gt;

&lt;p&gt;We’re proud to announce that our work continues at &lt;strong&gt;Methodical Cloud&lt;/strong&gt; — a new content-driven platform dedicated to clarity in automation, workflow design, and building scalable systems.&lt;/p&gt;

&lt;p&gt;While the name has changed, the mission remains: bringing clarity and purpose to complex infrastructure and automation challenges.&lt;/p&gt;

&lt;p&gt;You can now find all future blog posts, podcasts, and educational resources at:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href=&quot;https://methodicalcloud.com&quot;&gt;Visit Methodical Cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you for being part of the journey. We’re just getting started!&lt;/p&gt;</content><author><name>Larry Smith Jr.</name></author><category term="update" /><summary type="html">After many years of ideas, exploration, and community through Everything Should Be Virtual, it’s time for the next chapter.</summary></entry><entry><title type="html">Searching for the Source(s) of Truth in Holistic Automation</title><link href="https://everythingshouldbevirtual.com/automation/it%20management/devops/searching-for-the-source-of-truth-in-holistic-automation/" rel="alternate" type="text/html" title="Searching for the Source(s) of Truth in Holistic Automation" /><published>2024-07-01T00:00:00-04:00</published><updated>2024-07-01T00:00:00-04:00</updated><id>https://everythingshouldbevirtual.com/automation/it%20management/devops/searching-for-the-source-of-truth-in-holistic-automation</id><content type="html" xml:base="https://everythingshouldbevirtual.com/automation/it%20management/devops/searching-for-the-source-of-truth-in-holistic-automation/">&lt;p&gt;The Source of Truth (SoT) concept has become a cornerstone for achieving holistic automation in the ever-evolving IT and network management landscape. As organizations strive to streamline operations, ensure compliance, and enhance security, the quest for an accurate, reliable, and comprehensive SoT becomes paramount. This blog post delves into identifying and establishing a robust SoT and its implications for holistic automation. And you may be shocked to know: It’ll likely not be a Single Source of Truth (SSoT).&lt;/p&gt;

&lt;h2 id=&quot;why-a-source-of-truth-is-crucial&quot;&gt;Why a Source of Truth is Crucial&lt;/h2&gt;

&lt;h3 id=&quot;consistency-and-accuracy&quot;&gt;Consistency and Accuracy&lt;/h3&gt;

&lt;p&gt;Automation relies on precise and consistent data. An SoT ensures that all automated workflows work from the same dataset, reducing the risk of errors caused by discrepancies or outdated information.&lt;/p&gt;

&lt;h3 id=&quot;enhanced-security&quot;&gt;Enhanced Security&lt;/h3&gt;

&lt;p&gt;With a centralized SoT, security policies and configurations are uniformly applied across the entire network. This reduces vulnerabilities and ensures compliance with regulatory standards.&lt;/p&gt;

&lt;h3 id=&quot;efficiency&quot;&gt;Efficiency&lt;/h3&gt;

&lt;p&gt;Automation scripts and tools can quickly access the SoT to retrieve necessary data, speeding up processes and reducing manual intervention.&lt;/p&gt;

&lt;h3 id=&quot;scalability&quot;&gt;Scalability&lt;/h3&gt;

&lt;p&gt;Maintaining accurate configurations and policies across a more extensive infrastructure becomes challenging as organizations grow. An SoT provides a scalable solution to manage this complexity.&lt;/p&gt;

&lt;h2 id=&quot;challenges-in-establishing-a-source-of-truth&quot;&gt;Challenges in Establishing a Source of Truth&lt;/h2&gt;

&lt;h3 id=&quot;networking-teams&quot;&gt;Networking Teams&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Narrow Focus:&lt;/strong&gt; Networking teams typically have a focused scope, concentrating primarily on network configurations, devices, and connectivity.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Single Source of Truth:&lt;/strong&gt; They often rely on a single SoT, a specific system, server, or database dedicated to network configurations. Tools like Nautobot or NetBox are commonly chosen because they are highly effective for networking.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Fragmentation:&lt;/strong&gt; This narrow focus can lead to silos where the networking SoT doesn’t integrate well with other systems or teams’ SoTs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;systems-teams&quot;&gt;Systems Teams&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Expands Upon Networking Team:&lt;/strong&gt; Systems teams build on the data and configurations managed by the networking teams.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Falls Short:&lt;/strong&gt; Despite their broader scope, systems teams often fail to integrate fully with application and networking layers.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Yet Another Solution:&lt;/strong&gt; They might implement their own SoT solutions, leading to multiple, disconnected sources of truth within the organization.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;application-teams&quot;&gt;Application Teams&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Don’t Care About Anything Upstream:&lt;/strong&gt; Application teams typically focus on ensuring their applications run smoothly, often disregarding the underlying infrastructure and network configurations.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Just Want Their Service To Run (Somewhere):&lt;/strong&gt; Their primary concern is the availability and performance of their services, regardless of where or how they are hosted.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;the-result-no-good-end-to-end-solutions&quot;&gt;The Result: No Good End-To-End Solution(s)&lt;/h3&gt;

&lt;p&gt;The lack of a unified Source of Truth across networking, systems, and application teams results in:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Fragmented data and configurations.&lt;/li&gt;
  &lt;li&gt;Inefficiencies and increased risk of errors.&lt;/li&gt;
  &lt;li&gt;Challenges in achieving holistic automation and seamless operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;the-role-of-a-configuration-management-database-cmdb&quot;&gt;The Role of a Configuration Management Database (CMDB)&lt;/h2&gt;

&lt;h3 id=&quot;importance-of-cmdb-as-a-source-of-truth&quot;&gt;Importance of CMDB as a Source of Truth&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Centralized Asset Management:&lt;/strong&gt; A CMDB provides a single source for all asset-related information, including hardware, software, network components, and their relationships.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Enhanced Change Management:&lt;/strong&gt; Before changes are implemented, the CMDB can be used to assess the impact on other systems and services, reducing the risk of unintended consequences.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Improved Incident and Problem Management:&lt;/strong&gt; When issues arise, the CMDB can help identify the root cause by tracing the relationships and dependencies between different configuration items (CIs).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Regulatory Compliance:&lt;/strong&gt; A CMDB helps organizations maintain compliance with industry regulations by providing detailed documentation of assets and their configurations.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Data Accuracy and Integrity:&lt;/strong&gt; Modern CMDBs can integrate with discovery tools to automatically update records, ensuring the information remains current and accurate.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;challenges-with-cmdb-implementation&quot;&gt;Challenges with CMDB Implementation&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Discovery Process Issues:&lt;/strong&gt; The CMDB might have incomplete or inaccurate data if the discovery processes are not correctly implemented or configured.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Maintenance:&lt;/strong&gt; Ensuring the CMDB is kept up-to-date with changes in the IT environment requires continuous effort and automation.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Integration with Other Systems:&lt;/strong&gt; Integrating the CMDB with other IT systems and tools is crucial for a unified SoT but can be technically challenging.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;recommended-approach-for-establishing-a-source-of-truth-in-holistic-automation&quot;&gt;Recommended Approach for Establishing a Source of Truth in Holistic Automation&lt;/h2&gt;

&lt;h3 id=&quot;identify-multiple-sources-of-truth&quot;&gt;Identify Multiple Sources of Truth&lt;/h3&gt;

&lt;h4 id=&quot;greater-flexibility&quot;&gt;Greater Flexibility&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Adaptability:&lt;/strong&gt; Different teams and systems within an organization may have unique requirements and data needs. Identifying multiple SoTs allows for tailored solutions catering to specific contexts while maintaining coherence.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Scalability:&lt;/strong&gt; A multi-source approach can easily scale as the organization grows, accommodating new systems and teams without overhauling the existing SoT infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;leverage-existing-data-to-benefit-the-business&quot;&gt;Leverage Existing Data to Benefit the Business&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Cost Efficiency:&lt;/strong&gt; Utilizing existing data sources reduces the need for significant investments in new systems. It maximizes the value of current data assets.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Speed to Value:&lt;/strong&gt; Integrating and leveraging current data sources can accelerate the implementation of automation solutions, providing quicker returns on investment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;more-actionable-data&quot;&gt;More Actionable Data&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Comprehensive Insights:&lt;/strong&gt; Aggregating data from multiple SoTs offers a holistic view of the environment, enabling more informed decision-making.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Enhanced Analytics:&lt;/strong&gt; Diverse data sources provide richer datasets for analytics, leading to more accurate predictions and actionable insights.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;automation-effectiveness&quot;&gt;Automation Effectiveness&lt;/h3&gt;

&lt;h4 id=&quot;api-first-approach&quot;&gt;API-First Approach&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Interoperability:&lt;/strong&gt; Adopting an API-first strategy ensures that different systems and SoTs can communicate seamlessly, facilitating data exchange and integration.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Future-Proofing:&lt;/strong&gt; APIs provide a flexible framework that can adapt to new technologies and changing business needs, ensuring the longevity and relevance of automation solutions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;data-federation-and-transformation&quot;&gt;Data Federation and Transformation&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Unified View:&lt;/strong&gt; Data federation techniques aggregate data from various SoTs into a coherent view. This unified perspective is crucial for effective management and decision-making.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Data Quality:&lt;/strong&gt; Transformation processes standardize and cleanse data from different sources, ensuring consistency, accuracy, and reliability.&lt;/li&gt;
&lt;/ul&gt;
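As a minimal sketch of the federation idea above (entirely illustrative: it assumes dictionary-shaped records keyed by device name rather than any real tool's API), the following merges per-device records from two hypothetical SoTs, letting a configurable precedence order resolve conflicting fields:

```python
# Hypothetical data federation across two Sources of Truth.
# "network_sot" and "cmdb" are illustrative names, not real integrations.

def federate(network_sot, cmdb, precedence=("network_sot", "cmdb")):
    """Merge per-device records into a unified view.

    Conflicting fields are resolved by source precedence: the first
    source listed in `precedence` wins.
    """
    sources = {"network_sot": network_sot, "cmdb": cmdb}
    unified = {}
    # Apply lowest-precedence sources first so higher-precedence
    # sources overwrite conflicting fields afterwards.
    for name in reversed(precedence):
        for device, record in sources[name].items():
            merged = unified.setdefault(device, {})
            merged.update(record)
    return unified
```

In practice the inputs would come from each system's API, and the transformation step would also normalize field names and units before merging.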

&lt;h4 id=&quot;unified-view&quot;&gt;Unified View&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;End-to-End Visibility:&lt;/strong&gt; A unified view allows for comprehensive monitoring and management of the entire IT landscape, from networking and systems to applications.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Streamlined Operations:&lt;/strong&gt; Centralized visibility and control streamline operations, reduce complexity, and improve efficiency across the organization.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;real-world-example-networking-teams-and-nautobotnetbox&quot;&gt;Real-World Example: Networking Teams and Nautobot/NetBox&lt;/h2&gt;

&lt;p&gt;A typical scenario in many organizations is the networking team selecting tools like Nautobot or NetBox as their Source of Truth for network configurations. These tools are highly effective for networking, providing detailed and reliable data on network devices, configurations, and topologies. However, this choice can lead to silos because networking teams might not consider how their SoT integrates with the SoTs of systems or application teams.&lt;/p&gt;

&lt;p&gt;For instance, while Nautobot or NetBox excels at managing network configurations, systems teams might require additional data on server configurations, virtual machines, and storage systems. Similarly, application teams may focus on ensuring their services run smoothly without delving into the underlying infrastructure managed by the networking teams. This disconnect creates a fragmented landscape, making holistic automation challenging.&lt;/p&gt;

&lt;p&gt;Adopting an integrative approach that includes APIs and data federation techniques is crucial to bridge this gap. This will ensure that data from tools like Nautobot or NetBox can seamlessly integrate with other SoTs across the organization. This unified approach enhances visibility and management and drives more effective and efficient automation processes.&lt;/p&gt;

&lt;h2 id=&quot;leveraging-source-control-as-a-source-of-truth&quot;&gt;Leveraging Source Control as a Source of Truth&lt;/h2&gt;

&lt;p&gt;Many organizations and teams leverage source control systems, like Git, as their Source of Truth for configurations and infrastructure as code. This approach centralizes version control, collaboration, and auditing. However, using source control as an SoT at scale presents challenges:&lt;/p&gt;

&lt;h3 id=&quot;complexity-management&quot;&gt;Complexity Management&lt;/h3&gt;

&lt;p&gt;As the number of repositories, branches, and contributors grows, maintaining consistency and preventing configuration drift become more challenging.&lt;/p&gt;

&lt;h3 id=&quot;integration&quot;&gt;Integration&lt;/h3&gt;

&lt;p&gt;Ensuring seamless integration between source control and operational systems requires robust CI/CD pipelines and automation tools.&lt;/p&gt;

&lt;h3 id=&quot;real-time-updates&quot;&gt;Real-Time Updates&lt;/h3&gt;

&lt;p&gt;Source control systems are excellent for tracking changes over time but may lag in providing real-time state information compared to specialized configuration management databases (CMDBs).&lt;/p&gt;

&lt;h2 id=&quot;dynamic-source-of-truth-ensuring-accuracy&quot;&gt;Dynamic Source of Truth: Ensuring Accuracy&lt;/h2&gt;

&lt;p&gt;Having a dynamic Source of Truth is the ideal scenario for many organizations. A dynamic SoT continuously updates to reflect the current state of the infrastructure, providing real-time data for automation and decision-making. However, ensuring the accuracy of a dynamic SoT presents unique challenges:&lt;/p&gt;

&lt;h3 id=&quot;seeding-the-dynamic-sot&quot;&gt;Seeding the Dynamic SoT&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Initial Data Load:&lt;/strong&gt; To seed the dynamic system, begin with a reliable, static SoT. This could be an existing CMDB or a well-maintained source control repository.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Validation:&lt;/strong&gt; Thoroughly validate and cleanse the initial data to ensure accuracy before it becomes the basis for dynamic updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;continuous-verification&quot;&gt;Continuous Verification&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Automated Audits:&lt;/strong&gt; Implement automated audits to regularly verify the data in the dynamic SoT against actual system states. Tools like Ansible or Terraform can assist in reconciling discrepancies.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Anomaly Detection:&lt;/strong&gt; Use machine learning and anomaly detection techniques to identify and alert on data inconsistencies.&lt;/li&gt;
&lt;/ul&gt;
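An automated audit of this kind can be sketched as a simple diff between the SoT's expected records and the observed state (gathered, for example, from Ansible facts or a monitoring poller). The data shapes here are assumptions for illustration only:

```python
# Hypothetical audit: reconcile a dynamic SoT against observed state.
# Both inputs are dicts of device name -> attribute dict (illustrative).

def audit(sot, observed):
    """Return per-device discrepancies between the SoT and observed state."""
    drift = {}
    for device, expected in sot.items():
        actual = observed.get(device)
        if actual is None:
            # Device exists in the SoT but was not observed at all.
            drift[device] = {"missing": True}
            continue
        # Record each field as (expected, actual) where they disagree.
        diffs = {k: (v, actual.get(k)) for k, v in expected.items()
                 if actual.get(k) != v}
        if diffs:
            drift[device] = diffs
    return drift
```

The resulting drift report would then feed reconciliation jobs or alerting, closing the loop between the dynamic SoT and the real environment.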

&lt;h3 id=&quot;source-redundancy&quot;&gt;Source Redundancy&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Multiple Data Sources:&lt;/strong&gt; Leverage various data sources to cross-verify information. For example, both network monitoring tools and CMDBs can be used to validate network configurations.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Fallback Mechanisms:&lt;/strong&gt; Establish fallback mechanisms to revert to the last known good state in case of data corruption or significant discrepancies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;governance-and-policies&quot;&gt;Governance and Policies&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Data Governance:&lt;/strong&gt; Establish clear policies defining how data is added, updated, and validated within the dynamic SoT.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Role-Based Access Control:&lt;/strong&gt; Implement role-based access control to ensure only authorized personnel can change the SoT.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;leveraging-servicenow-as-a-source-of-truth&quot;&gt;Leveraging ServiceNow as a Source of Truth&lt;/h2&gt;

&lt;p&gt;Some organizations and teams prefer ServiceNow as their Source of Truth due to its comprehensive IT service management (ITSM), asset management, and workflow automation capabilities. ServiceNow can serve as an effective SoT, but it also comes with its own set of challenges and benefits:&lt;/p&gt;

&lt;h3 id=&quot;comprehensive-asset-management&quot;&gt;Comprehensive Asset Management&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Single Repository:&lt;/strong&gt; ServiceNow can centralize information about IT assets, incidents, changes, and configurations in a single repository, providing a holistic view.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Integration Capabilities:&lt;/strong&gt; ServiceNow’s robust integration capabilities enable it to connect with other systems and sources, ensuring that data is consolidated and synchronized across the IT landscape.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;workflow-automation&quot;&gt;Workflow Automation&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Streamlined Processes:&lt;/strong&gt; ServiceNow can automate workflows for incident management, change management, and other ITSM processes, ensuring that data in the SoT is continuously updated and accurate.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Approval Workflows:&lt;/strong&gt; Implementing approval workflows ensures that changes to the SoT are reviewed and authorized, maintaining data integrity.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;real-time-updates-and-accuracy&quot;&gt;Real-Time Updates and Accuracy&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Event-Driven Updates:&lt;/strong&gt; ServiceNow can be configured to update its records based on real-time events and monitoring data, ensuring that the SoT reflects the current state of the infrastructure.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Periodic Audits:&lt;/strong&gt; Regular audits and reconciliation processes can be automated within ServiceNow to verify data accuracy and detect discrepancies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;challenges-with-servicenow-discovery&quot;&gt;Challenges with ServiceNow Discovery&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Implementation Issues:&lt;/strong&gt; Many teams rely on ServiceNow’s discovery processes to populate and maintain their SoT. However, these processes are not always implemented correctly, leading to incomplete or inaccurate data.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Complex Environments:&lt;/strong&gt; In complex IT environments, ensuring that the discovery processes accurately reflect the state of all assets and configurations can be challenging. Misconfigurations or missed assets can result in an unreliable SoT.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Customization Needs:&lt;/strong&gt; ServiceNow discovery processes may require significant customization to fit an organization’s specific needs, adding to their complexity and potential for errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;future-trends&quot;&gt;Future Trends&lt;/h2&gt;

&lt;h3 id=&quot;opsmill-infrahub-and-its-potential&quot;&gt;Opsmill Infrahub and Its Potential&lt;/h3&gt;

&lt;p&gt;It will be interesting to see how Opsmill Infrahub manages sources of truth in holistic automation. Although I have not yet had time to investigate it, it looks promising and could offer new ways to streamline and integrate SoTs across diverse environments.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Establishing a flexible, integrated Source of Truth is pivotal for achieving holistic automation. By identifying multiple SoTs, leveraging existing data, and adopting an API-first approach, organizations can enhance their automation effectiveness, drive actionable insights, and achieve greater business success. Embrace this recommended approach to navigate the complexities of modern IT environments and unlock the full potential of holistic automation.&lt;/p&gt;</content><author><name>Larry Smith Jr.</name></author><category term="Automation" /><category term="IT Management" /><category term="DevOps" /><category term="Source of Truth" /><category term="CMDB" /><category term="ServiceNow" /><category term="Nautobot" /><category term="NetBox" /><category term="Opsmill Infrahub" /><summary type="html">The Source of Truth (SoT) concept has become a cornerstone for achieving holistic automation in the ever-evolving IT and network management landscape. As organizations strive to streamline operations, ensure compliance, and enhance security, the quest for an accurate, reliable, and comprehensive SoT becomes paramount. This blog post delves into identifying and establishing a robust SoT and its implications for holistic automation. 
And you may be shocked to know: It’ll likely not be a Single Source of Truth (SSoT).</summary></entry><entry><title type="html">Transforming IT Operations - The Rise of Infrastructure Automation Consulting</title><link href="https://everythingshouldbevirtual.com/transforming-it-operations-the-rise-of-infrastructure-automation-consulting/" rel="alternate" type="text/html" title="Transforming IT Operations - The Rise of Infrastructure Automation Consulting" /><published>2024-06-29T00:00:00-04:00</published><updated>2024-06-29T00:00:00-04:00</updated><id>https://everythingshouldbevirtual.com/transforming-it-operations-the-rise-of-infrastructure-automation-consulting</id><content type="html" xml:base="https://everythingshouldbevirtual.com/transforming-it-operations-the-rise-of-infrastructure-automation-consulting/">&lt;p&gt;As IT environments grow in complexity and scale, efficiently managing these intricate systems has become a critical challenge for many businesses. This is where infrastructure automation consulting plays a pivotal role. While I no longer function directly as an automation engineer, I lead automation engineers on various projects, guiding them to implement cutting-edge solutions that streamline and enhance IT operations. After a hiatus of over three years from blogging, I’m excited to share the transformative impact of infrastructure automation consulting and how it can revolutionize IT infrastructure management based on my recent experiences and advancements in the field.&lt;/p&gt;

&lt;h2 id=&quot;what-is-infrastructure-automation-consulting&quot;&gt;What is Infrastructure Automation Consulting?&lt;/h2&gt;

&lt;p&gt;Infrastructure automation consulting uses technology to manage physical and virtual environments automatically. This branch of consulting assists businesses in deploying software that handles the setup, management, and operation of IT infrastructure components such as servers, storage devices, and network elements. Automation technologies typically involve scripts, tools, and platforms that eliminate manual processes, reduce human error, and enhance service delivery.&lt;/p&gt;

&lt;h2 id=&quot;the-benefits-of-infrastructure-automation-consulting&quot;&gt;The Benefits of Infrastructure Automation Consulting&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Speed and Efficiency&lt;/strong&gt;: Automation accelerates numerous IT processes, including server provisioning, configuration management, and patch updates, dramatically speeding up deployment times and reducing downtime.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Cost Reduction&lt;/strong&gt;: Minimizing manual interventions reduces labor costs and operational expenses associated with manual errors and rework.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Consistency and Compliance&lt;/strong&gt;: Automated workflows ensure tasks are executed in the same manner every time, aiding compliance with industry standards and company policies.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Automation makes it easier to scale IT operations to handle increased load without additional complexity.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Improved Security&lt;/strong&gt;: Automating security configurations and updates ensures defenses are consistently applied across the entire infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;my-role-in-automation-consulting&quot;&gt;My Role in Automation Consulting&lt;/h2&gt;

&lt;p&gt;As a technical consultant within automation services, I sometimes engage in early presales consulting to help clients understand the potential benefits and implications of automation for their IT infrastructure. This role allows me to combine strategic oversight with technical expertise, guiding projects’ discovery, analysis, and design phases. My primary responsibility is to ensure that the solutions designed adhere to technical specifications and align with broader business outcomes, facilitating a seamless transition into development sprints where I continue to provide crucial oversight and guidance.&lt;/p&gt;

&lt;h2 id=&quot;establishing-automation-centers-of-excellence&quot;&gt;Establishing Automation Centers of Excellence&lt;/h2&gt;

&lt;p&gt;One critical aspect of my role involves helping clients establish Automation Centers of Excellence (CoEs). These centers are crucial for setting standards, sharing best practices, and providing organizational training and resources. By establishing a CoE, I ensure that automation efforts are not just isolated projects but part of a broader strategic initiative that fosters innovation and continuous improvement in automation practices. This approach helps organizations achieve long-term success in their automation journeys, enhancing efficiency, reducing costs, and improving service quality across the board.&lt;/p&gt;

&lt;h2 id=&quot;balancing-leadership-and-technical-passion&quot;&gt;Balancing Leadership and Technical Passion&lt;/h2&gt;

&lt;p&gt;While I sometimes miss the hands-on technical development work, my role has evolved to focus more on leadership and strategic guidance. To ensure the success of our projects, I lean on a team of highly skilled engineers who handle the development aspects. This collaboration allows me to focus on overarching project goals and client needs while staying connected to the technical aspects that sparked my passion for this field. It’s a balance that emphasizes the importance of teamwork and expertise, ensuring I deliver the best solutions to my clients.&lt;/p&gt;

&lt;h2 id=&quot;hybrid-cloud-solutions&quot;&gt;Hybrid Cloud Solutions&lt;/h2&gt;

&lt;p&gt;Much of my consulting work involves implementing and managing hybrid cloud solutions. Hybrid environments, combining on-premises and cloud infrastructure, present unique challenges and opportunities for automation. Integrating automation tools such as Ansible and Terraform, I help clients achieve seamless management across their hybrid landscapes. This approach enhances flexibility and scalability and ensures consistent deployments and operations across all platforms, driving efficiency while maintaining security and compliance.&lt;/p&gt;

&lt;h2 id=&quot;clarifying-client-visions&quot;&gt;Clarifying Client Visions&lt;/h2&gt;

&lt;p&gt;In my consulting role, I often work closely with clients who have a vision for enhancing their IT operations but may struggle to articulate or define it clearly. My task is to bridge this gap by translating vague ideas into actionable strategic plans. Through a collaborative process, we define clear objectives, select the appropriate technologies, and craft a phased implementation strategy that aligns with their business goals. This ensures that the envisioned automation solution meets and exceeds their expectations, facilitating a transformative impact on their operations.&lt;/p&gt;

&lt;h2 id=&quot;gap-analysis-in-automation-landscapes&quot;&gt;Gap Analysis in Automation Landscapes&lt;/h2&gt;

&lt;p&gt;A critical component of my role involves conducting a comprehensive gap analysis of a client’s current automation landscapes. This analysis helps identify shortcomings in existing systems, areas where new automation tools can be integrated, and opportunities for enhancing efficiencies. By pinpointing these gaps, I make targeted recommendations that pave the way for refined automation strategies. This ensures clients can modernize their infrastructure effectively and maximize their return on investment in automation technologies.&lt;/p&gt;

&lt;h2 id=&quot;implementing-site-reliability-engineering-sre-and-aiml-initiatives&quot;&gt;Implementing Site Reliability Engineering (SRE) and AI/ML Initiatives&lt;/h2&gt;

&lt;p&gt;In addition to infrastructure automation services, I assist clients in defining and implementing Site Reliability Engineering (SRE) within their teams and organizations. SRE is a discipline that incorporates aspects of software engineering and applies them to infrastructure and operations problems. The goal is to create scalable and highly reliable software systems. By introducing SRE practices, I help organizations enhance their operational capabilities, automate where possible, and improve service reliability. Furthermore, I work on projects that involve AI and machine learning, utilizing these technologies to further enhance automation and predictive capabilities within IT operations. This strategic integration ensures that operational responsibilities are shared among various teams, thereby fostering a culture of efficiency and continuous improvement.&lt;/p&gt;

&lt;h2 id=&quot;integrating-gitops-and-devops-practices&quot;&gt;Integrating GitOps and DevOps Practices&lt;/h2&gt;

&lt;p&gt;In addition to my core responsibilities in infrastructure automation, I also work extensively within GitOps and DevOps frameworks. GitOps is an operational framework that takes DevOps best practices for application development, such as version control, collaboration, compliance, and CI/CD, and applies them to infrastructure automation. By leveraging GitOps, I help organizations streamline their infrastructure’s deployment, management, and maintenance through code, which increases efficiency and reduces the risk of human error. DevOps principles form the backbone of my approach to bridging the gap between development, operations, and quality assurance teams. This collaboration accelerates the pace at which products are developed and deployed, enhances response to customer feedback, and improves the quality of software solutions. Embracing these methodologies allows me to ensure that the automation strategies I design are robust, scalable, and aligned with continuous integration and continuous delivery practices.&lt;/p&gt;

&lt;h2 id=&quot;mentoring-the-next-generation-of-automation-engineers&quot;&gt;Mentoring the Next Generation of Automation Engineers&lt;/h2&gt;

&lt;p&gt;In addition to my project leadership and consulting roles, I also take pride in mentoring automation engineers. This aspect of my work allows me to share the knowledge and insights I’ve gained over the years with emerging talents in the industry. By mentoring, I help engineers refine their technical skills and develop a strategic mindset necessary for innovation and problem-solving in automation. This contribution ensures that the field continues to evolve and adapt, driven by well-rounded professionals equipped to tackle the challenges of modern IT environments.&lt;/p&gt;

&lt;h2 id=&quot;enhancing-it-operations-with-observability&quot;&gt;Enhancing IT Operations with Observability&lt;/h2&gt;

&lt;p&gt;Observability is critical to modern IT operations, enabling organizations to monitor, track, and analyze data across their infrastructure to make informed decisions. As part of my role in infrastructure automation consulting, I emphasize the importance of implementing comprehensive observability solutions beyond traditional monitoring. These solutions provide deep insights into the health and performance of IT systems, allowing for proactive management and swift resolution of issues before they impact business operations. By utilizing advanced tools and practices, such as real-time analytics, log aggregation, and AI-driven anomaly detection, I help clients achieve high transparency and control over their automated environments. This ensures that systems are efficient, compliant, resilient, and adaptive to changes.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Infrastructure automation consulting offers a significant advantage by optimizing how IT environments are managed. By embracing automated solutions, companies can enhance their IT agility, reduce costs, and improve overall service quality. Whether you want to streamline data center operations, strengthen network security, or ensure consistent system configurations, infrastructure automation is a key to modern IT efficiency.&lt;/p&gt;</content><author><name>Larry Smith Jr.</name></author><summary type="html">As IT environments grow in complexity and scale, efficiently managing these intricate systems has become a critical challenge for many businesses. This is where infrastructure automation consulting plays a pivotal role. While I no longer function directly as an automation engineer, I lead automation engineers on various projects, guiding them to implement cutting-edge solutions that streamline and enhance IT operations. After a hiatus of over three years from blogging, I’m excited to share the transformative impact of infrastructure automation consulting and how it can revolutionize IT infrastructure management based on my recent experiences and advancements in the field.</summary></entry><entry><title type="html">NFD24 - DriveNets</title><link href="https://everythingshouldbevirtual.com/NFD24-DriveNets/" rel="alternate" type="text/html" title="NFD24 - DriveNets" /><published>2021-03-02T09:00:00-05:00</published><updated>2021-03-02T09:00:00-05:00</updated><id>https://everythingshouldbevirtual.com/NFD24-DriveNets</id><content type="html" xml:base="https://everythingshouldbevirtual.com/NFD24-DriveNets/">&lt;p&gt;Recently I had the privilege to attend &lt;a href=&quot;https://techfieldday.com/event/nfd24/&quot;&gt;NFD 24&lt;/a&gt;
at which &lt;a href=&quot;https://drivenets.com&quot;&gt;DriveNets&lt;/a&gt; presented. I had never heard of them
until this event, which I’m a bit ashamed of, because they are doing some
amazing things to transform the network market.&lt;/p&gt;

&lt;p&gt;One of the primary things that caught my attention was their work on
disaggregating network functions. This is something that I have experienced
in some capacity or another over the past few years, so once I understood what
they were doing, I was immediately interested.&lt;/p&gt;

&lt;p&gt;They have two products that I will touch on in this post: &lt;a href=&quot;https://drivenets.com/products/dnos/&quot;&gt;DNOS&lt;/a&gt; and &lt;a href=&quot;https://drivenets.com/products/dnor/&quot;&gt;DNOR&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;dnos---drivenets-network-operating-system&quot;&gt;DNOS - DriveNets Network Operating System&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://drivenets.com/products/dnos/&quot;&gt;DNOS&lt;/a&gt; is a fully featured networking
stack that runs on servers and white boxes. DNOS leverages many cloud and
virtualization technologies to run as a fully distributed OS composed of Docker
containers in your network. The data plane runs on a cluster of white boxes,
while the control plane runs in any environment that supports containers,
disaggregating the two planes for massive scale.&lt;/p&gt;

&lt;h2 id=&quot;dnor---drivenets-network-orchestrator&quot;&gt;DNOR - DriveNets Network Orchestrator&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://drivenets.com/products/dnor/&quot;&gt;DNOR&lt;/a&gt; is an orchestrator that automates
the deployment, scaling, and management of the DriveNets cloud solution, thereby
accelerating the deployment of cloud-native networks.&lt;/p&gt;

&lt;p&gt;Through these automated operations, DNOR operationalizes life-cycle management
tasks such as zero-touch provisioning, modular software orchestration, and
scale-up and scale-down. This provides benefits such as shorter maintenance
windows, simplified operations, and improved reliability.&lt;/p&gt;

&lt;p&gt;Another benefit of DNOR is complete visibility into your network. Cluster
views, availability, and performance across hardware and software are all
visible. In turn, this helps you troubleshoot issues faster and ensures that
network performance and availability are proactively monitored.&lt;/p&gt;

&lt;h2 id=&quot;target-audience&quot;&gt;Target Audience&lt;/h2&gt;

&lt;p&gt;From what I gathered at this point, it appears that service providers are the
target audience for DriveNets. With that being said, &lt;a href=&quot;https://about.att.com/story/2020/open_disaggregated_core_router.html&quot;&gt;AT&amp;amp;T&lt;/a&gt; became the first in
the industry to implement DriveNets disaggregated core routing platform. This is
obviously a huge win for DriveNets, and they definitely did a great job highlighting
this during their presentation. Huge props to the DriveNets team on this for sure.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Although I have only highlighted two of DriveNets’ products here, I plan on
digging into their products more in depth as time permits. As I mentioned at the
start of this post, I’ve been exposed to a few implementations over the past
several years where DriveNets would have been a game changer. So, I’ll definitely
be keeping an eye on what they are doing over time.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;DISCLAIMER: I have been invited to Network Field Day Exclusive by Gestalt IT
who paid for travel, hotel, meals and transportation. I did not receive any
compensation to attend NFD and I am under no obligation whatsoever to write any
content related to NFD. The contents of these blog posts represent my personal
opinions about the products and solutions presented during NFD.&lt;/p&gt;
&lt;/blockquote&gt;</content><author><name>Larry Smith Jr.</name></author><summary type="html">Recently I had the privilege to attend NFD 24 in which DriveNets presented. I had never heard of them until this event. Which I am shaming myself for because they are doing some amazing things around transforming the network market.</summary></entry><entry><title type="html">Ubuntu 20.04 - cloud-init Gotchas</title><link href="https://everythingshouldbevirtual.com/Ubuntu-20.04-cloud-init-gotchas/" rel="alternate" type="text/html" title="Ubuntu 20.04 - cloud-init Gotchas" /><published>2020-08-25T22:41:00-04:00</published><updated>2020-08-25T22:41:00-04:00</updated><id>https://everythingshouldbevirtual.com/Ubuntu-20.04-cloud-init-gotchas</id><content type="html" xml:base="https://everythingshouldbevirtual.com/Ubuntu-20.04-cloud-init-gotchas/">&lt;p&gt;Recently while working on my latest &lt;a href=&quot;https://github.com/mrlesmithjr/packer-templates-revisited&quot;&gt;Packer Templates&lt;/a&gt;
I ran into an issue with Ubuntu 20.04. The issue was related to &lt;a href=&quot;https://cloud-init.io/&quot;&gt;cloud-init&lt;/a&gt;
not being able to grow the root partition nor change the hostname. I was testing
this on &lt;a href=&quot;https://www.proxmox.com/en/&quot;&gt;Proxmox&lt;/a&gt; using Terraform.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;NOTE: &lt;a href=&quot;https://github.com/Telmate/terraform-provider-proxmox&quot;&gt;terraform-provider-proxmox&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Obviously these are two very important capabilities when provisioning cloud
instances. So, why was this not working? Well, I started Googling, and of course
nothing came up that would give me a clue. So, I started digging into the logs,
and sure enough, I found that these two capabilities were not working because of
the file &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/cloud/cloud.cfg.d/99-installer.cfg&lt;/code&gt;.&lt;/p&gt;
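&lt;p&gt;If you hit something similar, the standard cloud-init logs are a good first stop. A minimal sketch (the log path is the usual cloud-init default on Ubuntu):&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Look for the settings that disable growing the partition and setting the hostname
grep -iE &apos;growpart|preserve_hostname|resize_rootfs&apos; /var/log/cloud-init.log
# Then inspect the offending config fragment
cat /etc/cloud/cloud.cfg.d/99-installer.cfg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;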

&lt;p&gt;An example of the contents of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc/cloud/cloud.cfg.d/99-installer.cfg&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;datasource:
  None:
    metadata: &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;instance-id: 872e2bc0-9805-4623-bdda-5e8bcca540dc&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
    userdata_raw: &lt;span class=&quot;s2&quot;&gt;&quot;#cloud-config&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;growpart: {mode: &apos;off&apos;}&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;locale: en_US.UTF-8&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;preserve_hostname:&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\ &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;resize_rootfs: false&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;users:&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;- gecos: packer&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;  groups: [adm, cdrom,&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\ &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;dip, plugdev, lxd, sudo]&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;  lock_passwd: false&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;  name: packer&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;  passwd:&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\ &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$6$AA&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;.Jw829.bXpJ4w&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$bf2mI99OoUo2F4&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/rSfnAD9vNg2vjOiJaynMSeOgZcE3PB/OMCRgYuon74mIyzgUiXBEA8/VluqEQuZBGcQq5B.&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n\&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
      &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\ &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt; shell: /bin/bash&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\n&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
datasource_list: &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;None]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;From the above example you can see that &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;growpart: {mode: &apos;off&apos;}&lt;/code&gt; and
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;preserve_hostname: true&lt;/code&gt; are set. This right here was the cause of my issues.
To resolve the issue, simply delete the file and you are good to go!&lt;/p&gt;
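&lt;p&gt;In other words (run as root; the path is the one from the log above):&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Remove the installer-generated override so cloud-init can grow the root
# partition and set the hostname again on the next boot
rm -f /etc/cloud/cloud.cfg.d/99-installer.cfg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;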

&lt;p&gt;Just sharing this with folks in case you ever run into this as well.&lt;/p&gt;

&lt;p&gt;Enjoy!&lt;/p&gt;</content><author><name>Larry Smith Jr.</name></author><summary type="html">Recently while working on my latest Packer Templates I ran into an issue with Ubuntu 20.04. The issue was related to cloud-init not being able to grow the root partition nor change the hostname. I was testing this on Proxmox using Terraform.</summary></entry><entry><title type="html">CFD7 - VMware TKG</title><link href="https://everythingshouldbevirtual.com/CFD7-VMware-TKG/" rel="alternate" type="text/html" title="CFD7 - VMware TKG" /><published>2020-05-05T01:00:00-04:00</published><updated>2020-05-05T01:00:00-04:00</updated><id>https://everythingshouldbevirtual.com/CFD7-VMware-TKG</id><content type="html" xml:base="https://everythingshouldbevirtual.com/CFD7-VMware-TKG/">&lt;p&gt;Recently I had the pleasure to attend &lt;a href=&quot;https://techfieldday.com/event/cfd7/&quot;&gt;#CFD7&lt;/a&gt;
in which VMware presented &lt;a href=&quot;https://tanzu.vmware.com/kubernetes-grid&quot;&gt;VMware Tanzu Kubernetes Grid&lt;/a&gt;. Our friend, &lt;a href=&quot;https://twitter.com/kendrickcoleman&quot;&gt;Kendrick Coleman&lt;/a&gt; did a great job presenting TKG to us. Even squashed a few
questions/concerns along the way. Of course, not all of them were squashed but..&lt;/p&gt;

&lt;!-- Courtesy of embedresponsively.com //--&gt;

&lt;div class=&quot;responsive-video-container&quot;&gt;
    &lt;iframe src=&quot;https://player.vimeo.com/video/411486627?dnt=true&quot; frameborder=&quot;0&quot; webkitallowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
  &lt;/div&gt;

&lt;blockquote&gt;
  &lt;p&gt;NOTE: Current version of Kubernetes supported as of #CFD7 - 1.17.3&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;what-is-tkg&quot;&gt;What Is TKG&lt;/h2&gt;

&lt;p&gt;Directly from the VMware TKG docs, TKG can be summed up as:&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;VMware Tanzu™ Kubernetes Grid™ provides organizations with a consistent, upstream-compatible, regional Kubernetes substrate across software-defined datacenters (SDDC) and public cloud environments, that is ready for end-user workloads and ecosystem integrations. Tanzu Kubernetes Grid builds on trusted upstream and community projects and delivers a Kubernetes platform that is engineered and supported by VMware, so that you do not have to build your Kubernetes environment by yourself.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;TKG is the core component of Kubernetes deployments, and it applies to all
implementations. TKG uses Kubernetes to deploy Kubernetes. Ummm What? Yep,
you heard that! WOW! I can hear the comments already about how complex this
sounds. And yes, it does sound complex. But it actually makes sense as a way to
ensure a consistent deployment across all implementations. Still, it is very
complex and not for the faint of heart. The reality is, if they never showed the
TKG CLI and the underlying constructs, no one would likely question the
complexity. But here we are!&lt;/p&gt;

&lt;p&gt;TKG extends core Kubernetes with Custom Resource Definitions (CRDs). Using &lt;a href=&quot;https://github.com/kubernetes-sigs/cluster-api&quot;&gt;Cluster API&lt;/a&gt;, these CRDs define resources that native Kubernetes does not know about.&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../../images/2020/05/2020-05-10-16-44-12.png&quot; alt=&quot;VMware CRD - Cluster API&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;architecture&quot;&gt;Architecture&lt;/h2&gt;

&lt;p&gt;When it comes to the architecture of TKG, we can see that there is a lot of
buzzword bingo going on here. But hey, it takes a lot to make this work!&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../../images/2020/05/2020-05-10-16-58-44.png&quot; alt=&quot;TKG-Architecture&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Core Components:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;IdP Auth - &lt;a href=&quot;https://github.com/dexidp/dex&quot;&gt;dex&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Monitoring&lt;/li&gt;
  &lt;li&gt;Logging/Monitoring - &lt;a href=&quot;https://fluentbit.io/&quot;&gt;Fluentbit&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Cluster Lifecycle - &lt;a href=&quot;https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/&quot;&gt;kubeadm&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Container Registry - &lt;a href=&quot;https://goharbor.io/&quot;&gt;Harbor&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Ingress - &lt;a href=&quot;https://projectcontour.io/&quot;&gt;Contour&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Lifecycle Management - &lt;a href=&quot;https://github.com/kubernetes-sigs/cluster-api&quot;&gt;Cluster API&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additional Components:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Runtime - &lt;a href=&quot;https://containerd.io/&quot;&gt;Containerd&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Networking CNI - &lt;a href=&quot;https://docs.projectcalico.org/getting-started/kubernetes/&quot;&gt;Calico&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Crash Diagnostics&lt;/li&gt;
  &lt;li&gt;Provided OVA and AMI Images&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;img src=&quot;../../images/2020/05/2020-05-10-17-04-11.png&quot; alt=&quot;TKG - Component Internals&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;implementations&quot;&gt;Implementations&lt;/h2&gt;

&lt;p&gt;VMware TKG comes in three different flavors (personas):&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.0/vmware-tanzu-kubernetes-grid-10/GUID-index.html&quot;&gt;Standalone Tanzu Kubernetes Grid&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-kubernetes/GUID-7E00E7C2-D1A1-4F7D-9110-620F30C02547.html&quot;&gt;VMware Tanzu™ Kubernetes Grid™ service for vSphere&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.vmware.com/en/VMware-Tanzu-Mission-Control/index.html&quot;&gt;VMware Tanzu™ Mission Control™&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;standalone-tanzu-kubernetes-grid&quot;&gt;Standalone Tanzu Kubernetes Grid&lt;/h2&gt;

&lt;p&gt;At the core of TKG, we get TKG CLI.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;NOTE: You can download TKG CLI from &lt;a href=&quot;https://www.vmware.com/go/get-tkg&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Creating a new TKG cluster is as simple (not tested) as executing:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;tkg create cluster &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;name] &lt;span class=&quot;nt&quot;&gt;--plan&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;production
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;img src=&quot;../../images/2020/05/2020-05-10-22-41-15.png&quot; alt=&quot;TKG - Standalone&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The following platforms are supported to deploy to:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;vSphere 6.7u3&lt;/li&gt;
  &lt;li&gt;vSphere 7.0 (see below)&lt;/li&gt;
  &lt;li&gt;Amazon EC2&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;vsphere-70&quot;&gt;vSphere 7.0&lt;/h3&gt;

&lt;p&gt;With vSphere 7.0, you do not need to deploy TKG management clusters if you
enabled the vSphere with Kubernetes feature. This is because you can use the TKG
CLI to connect directly to the Supervisor Cluster available when this feature
is enabled.&lt;/p&gt;

&lt;p&gt;However, if the vSphere with Kubernetes feature is not enabled, you can still
use the TKG CLI to deploy a management cluster, but it is not supported. The
process is identical to vSphere 6.7u3.&lt;/p&gt;

&lt;h2 id=&quot;vmware-tanzu-kubernetes-grid-service-for-vsphere&quot;&gt;VMware Tanzu™ Kubernetes Grid™ service for vSphere&lt;/h2&gt;

&lt;p&gt;TKG for vSphere was originally called Project Pacific when it was announced at
VMworld 2019. A Tanzu Kubernetes Grid cluster runs as virtual machines at the
Supervisor layer of vSphere. This service is enabled as a feature on vSphere 7.0 (see above).&lt;/p&gt;

&lt;p&gt;TKG for vSphere makes a lot of sense for the standard vSphere admin, as it brings
the Kubernetes constructs into vCenter in traditional-ish ways. I personally
feel this will likely be where we see a lot of deployments occurring. But, we
shall see over time of course.&lt;/p&gt;

&lt;h2 id=&quot;vmware-tanzu-mission-control&quot;&gt;VMware Tanzu™ Mission Control™&lt;/h2&gt;

&lt;p&gt;What is Tanzu Mission Control? Well the following pretty much sums that up:&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Tanzu Mission Control helps organizations to overcome the challenge of managing a fleet of Kubernetes clusters on-premises, in the cloud and from multiple vendors.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../../images/2020/05/2020-05-10-22-45-15.png&quot; alt=&quot;TKG - Mission Control&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Tanzu Mission Control was under development by Heptio prior to their acquisition
by VMware.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;In conclusion, I wanted to quickly note some of the elements that were
touched on during the short 30-minute session we had. I’ll likely be exploring
the various TKG concepts more in depth very soon. But until then, I’ll continue
to handle my own automated Kubernetes deployments as I’ve done for a few years now.&lt;/p&gt;

&lt;h2 id=&quot;follow-up&quot;&gt;Follow UP&lt;/h2&gt;

&lt;p&gt;One question I asked during the session was: how does the TKG CLI function in a
CI/CD pipeline? After watching the session back, I don’t think where I was
going with it was very clear. So, I’ll attempt to add a bit more context here.&lt;/p&gt;

&lt;p&gt;My question came from the perspective of understanding that the TKG CLI can
perform a one-time provisioning of a management cluster with no issues. However,
if I am managing my complete infrastructure as code and leveraging pipelines that
run continuously, I need to ensure that I do not attempt to provision the
management cluster on each iteration of the pipeline. I asked my question from an
idempotency context, which made absolutely no sense :( Because after listening
back and digging into TKG more, the process already follows the declarative
manner of Kubernetes.&lt;/p&gt;

&lt;p&gt;So, if I were to answer my own question, I’d answer with: first check whether
the management cluster is already available and functional. If it is not, then
provision it; if it is, skip it. I know this sounds ridiculous, but for whatever
reason when I listen to things like this, I automatically jump to how it looks
from a holistic view when doing full datacenter automation.&lt;/p&gt;
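&lt;p&gt;As a rough sketch of that check-then-provision guard (hypothetical; the kubectl context name and the commented-out &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;tkg init&lt;/code&gt; invocation are assumptions, not something shown during the session):&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Hypothetical pipeline guard: provision the management cluster only when
# one is not already reachable. The context name &quot;tkg-mgmt&quot; is an assumption.
mgmt_cluster_up() {
  kubectl --context tkg-mgmt get nodes &gt;/dev/null 2&gt;&amp;1
}

if mgmt_cluster_up; then
  echo &quot;Management cluster present, skipping provisioning&quot;
else
  echo &quot;Management cluster missing, provisioning&quot;
  # tkg init --infrastructure vsphere ...
fi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;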

&lt;h2 id=&quot;additional-cfd7-resources&quot;&gt;Additional CFD7 Resources&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.eigenmagic.com/2020/05/11/vmware-makes-kubernetes-even-more-so-with-tanzu/&quot;&gt;https://www.eigenmagic.com/2020/05/11/vmware-makes-kubernetes-even-more-so-with-tanzu/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
  &lt;p&gt;DISCLAIMER: I have been invited to Cloud Field Day Exclusive by Gestalt IT who
paid for travel, hotel, meals and transportation. I did not receive any
compensation to attend CFD and I am under no obligation whatsoever to write any
content related to CFD. The contents of these blog posts represent my personal
opinions about the products and solutions presented during CFD.&lt;/p&gt;
&lt;/blockquote&gt;</content><author><name>Larry Smith Jr.</name></author><summary type="html">Recently I had the pleasure to attend #CFD7 in which VMware presented VMware Tanzu Kubernetes Grid. Our friend, Kendrick Coleman did a great job presenting TKG to us. Even squashed a few questions/concerns along the way. Of course, not all of them were squashed but..</summary></entry><entry><title type="html">Updating Git Project Structure</title><link href="https://everythingshouldbevirtual.com/Updating-Git-Project-Structure/" rel="alternate" type="text/html" title="Updating Git Project Structure" /><published>2020-02-20T13:00:00-05:00</published><updated>2020-02-20T13:00:00-05:00</updated><id>https://everythingshouldbevirtual.com/Updating-Git-Project-Structure</id><content type="html" xml:base="https://everythingshouldbevirtual.com/Updating-Git-Project-Structure/">&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;Creating and maintaining a consistent project structure is crucial for efficient collaboration and automation. In this post, I’ll share how I’ve been using a &lt;a href=&quot;https://cookiecutter.readthedocs.io/&quot; title=&quot;https://cookiecutter.readthedocs.io/&quot;&gt;Cookiecutter&lt;/a&gt; template to streamline the creation of new &lt;a href=&quot;https://ansible.com/&quot; title=&quot;https://ansible.com/&quot;&gt;Ansible&lt;/a&gt; roles and how I plan to update my existing roles to fit this new structure.&lt;/p&gt;

&lt;h2 id=&quot;creating-the-cookiecutter-template&quot;&gt;Creating the Cookiecutter Template&lt;/h2&gt;

&lt;p&gt;Lately, I have been working on putting together a Cookiecutter template to use when creating new Ansible roles. &lt;a href=&quot;https://github.com/mrlesmithjr/cookiecutter-ansible-role&quot; title=&quot;https://github.com/mrlesmithjr/cookiecutter-ansible-role&quot;&gt;This&lt;/a&gt; template includes several key features:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Ansible Role Structure&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Continuous Integration (CI)&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;GitHub Actions&lt;/li&gt;
      &lt;li&gt;GitLab CI/CD&lt;/li&gt;
      &lt;li&gt;Travis&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Molecule Testing&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Documentation&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;Code of Conduct&lt;/li&gt;
      &lt;li&gt;Contributing Guidelines&lt;/li&gt;
      &lt;li&gt;License&lt;/li&gt;
      &lt;li&gt;Readme&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;benefits-of-a-consistent-structure&quot;&gt;Benefits of a Consistent Structure&lt;/h2&gt;

&lt;p&gt;Implementing a consistent structure has several advantages:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Ease of Use:&lt;/strong&gt; Developers can quickly start new projects with a familiar setup.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Maintainability:&lt;/strong&gt; Standardized practices make it easier to maintain and update roles.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Collaboration:&lt;/strong&gt; A standard structure improves collaboration among team members.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;challenges-and-solutions&quot;&gt;Challenges and Solutions&lt;/h2&gt;

&lt;p&gt;This sounds great for creating new Ansible roles, but what about the hundreds of existing roles I already have? How will I incorporate all of them into this same new structure?&lt;/p&gt;

&lt;h3 id=&quot;migration-strategy&quot;&gt;Migration Strategy&lt;/h3&gt;

&lt;p&gt;Here are some steps to consider for migrating existing roles:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;Assessment:&lt;/strong&gt; Evaluate the existing roles to understand the scope of changes needed.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Automation:&lt;/strong&gt; Use automation scripts to apply the new structure to existing roles.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Testing:&lt;/strong&gt; Ensure thorough testing to validate that the migrated roles function correctly.&lt;/li&gt;
&lt;/ol&gt;
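&lt;p&gt;Step 2 could look something like the following. This is a rough sketch, not what I actually ran; the output directory, the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;role_name&lt;/code&gt; extra context, and copying only &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;tasks&lt;/code&gt; back are all illustrative:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Illustrative only: regenerate each existing role from the Cookiecutter
# template, then copy the role&apos;s real content back into the new skeleton
for role in ansible-*/; do
  [ -d &quot;$role&quot; ] || continue
  role=${role%/}
  cookiecutter --no-input -o /tmp/migrated \
    https://github.com/mrlesmithjr/cookiecutter-ansible-role.git \
    role_name=&quot;$role&quot;
  rsync -a &quot;$role/tasks/&quot; &quot;/tmp/migrated/$role/tasks/&quot;
done
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;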

&lt;h2 id=&quot;examples-and-implementation&quot;&gt;Examples and Implementation&lt;/h2&gt;

&lt;p&gt;In this example, I will be working in my &lt;a href=&quot;https://github.com/mrlesmithjr/ansible-control-machine.git&quot; title=&quot;https://github.com/mrlesmithjr/ansible-control-machine.git&quot;&gt;ansible-control-machine&lt;/a&gt; Ansible role.&lt;/p&gt;

&lt;p&gt;The first thing I will do is clone the project, but I will be cloning to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ansible-control-machine.orig&lt;/code&gt; directory:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;git clone git@github.com:mrlesmithjr/ansible-control-machine.git ansible-control-machine.orig
...
Cloning into &lt;span class=&quot;s1&quot;&gt;&apos;ansible-control-machine.orig&apos;&lt;/span&gt;...
remote: Enumerating objects: 20, &lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
remote: Counting objects: 100% &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;20/20&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;, &lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
remote: Compressing objects: 100% &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;15/15&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;, &lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
remote: Total 141 &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;delta 8&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;, reused 13 &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;delta 5&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;, pack-reused 121
Receiving objects: 100% &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;141/141&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;, 35.10 KiB | 1.67 MiB/s, &lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
Resolving deltas: 100% &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;50/50&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;, &lt;span class=&quot;k&quot;&gt;done&lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
&lt;span class=&quot;sb&quot;&gt;```&lt;/span&gt;bash

Let&lt;span class=&quot;s1&quot;&gt;&apos;s check real quick to see what our original structure looked like:

```bash
ls -la ansible-control-machine.orig
...
total 32
drwxr-xr-x  14 larrysmithjr  staff   448 Feb 20 21:25 .
drwxr-xr-x  31 larrysmithjr  staff   992 Feb 20 21:31 ..
drwxr-xr-x  12 larrysmithjr  staff   384 Feb 20 21:25 .git
-rw-r--r--   1 larrysmithjr  staff  2427 Feb 20 21:25 .travis.yml
-rw-r--r--   1 larrysmithjr  staff  1486 Feb 20 21:25 .yamllint.yml
-rw-r--r--   1 larrysmithjr  staff  2050 Feb 20 21:25 README.md
drwxr-xr-x  15 larrysmithjr  staff   480 Feb 20 21:25 Vagrant
drwxr-xr-x   3 larrysmithjr  staff    96 Feb 20 21:25 defaults
drwxr-xr-x   3 larrysmithjr  staff    96 Feb 20 21:25 handlers
drwxr-xr-x   3 larrysmithjr  staff    96 Feb 20 21:25 meta
-rwxr-xr-x   1 larrysmithjr  staff   419 Feb 20 21:25 setup_travis_tests.sh
drwxr-xr-x   6 larrysmithjr  staff   192 Feb 20 21:25 tasks
drwxr-xr-x  17 larrysmithjr  staff   544 Feb 20 21:25 tests
drwxr-xr-x   3 larrysmithjr  staff    96 Feb 20 21:25 vars
```

Next, I will launch `cookiecutter` and use my [cookiecutter-ansible-role](https://github.com/mrlesmithjr/cookiecutter-ansible-role &quot;https://github.com/mrlesmithjr/cookiecutter-ansible-role&quot;) template to create a new project called `ansible-control-machine`.

```bash
cookiecutter https://github.com/mrlesmithjr/cookiecutter-ansible-role.git
```

Following the prompts, I will fill in the details.

```bash
role_name [Enter Ansible role name]: ansible-control-machine
description [Enter description of Ansible role]: Ansible role to build an Ansible control machine
author [Your Name]: Larry Smith Jr.
company [Enter company name]:
email [me@example.com]: mrlesmithjr@gmail.com
website [http://example.com]: http://everythingshouldbevirtual.com
twitter [example]: mrlesmithjr
Select license:
1 - MIT
2 - BSD-3
3 - Apache Software License 2.0
Choose from 1, 2, 3 [1]:
min_ansible_version [2.8]:
year [2020]:
github_username [Enter your GitHub username]: mrlesmithjr
travis_username [Enter your Travis CI username]: mrlesmithjr
Select default_ci_badges:
1 - Y
2 - N
Choose from 1, 2 [1]:
```

Now I should have a new directory called `ansible-control-machine`. So, let&apos;s see what the new structure looks like:

```bash
ls -la ansible-control-machine
...
total 96
drwxr-xr-x  24 larrysmithjr  staff   768 Feb 20 21:31 .
drwxr-xr-x  31 larrysmithjr  staff   992 Feb 20 21:31 ..
drwxr-xr-x   3 larrysmithjr  staff    96 Feb 20 21:31 .github
-rw-r--r--   1 larrysmithjr  staff     6 Feb 20 21:31 .gitignore
-rw-r--r--   1 larrysmithjr  staff   417 Feb 20 21:31 .gitlab-ci.yml
-rw-r--r--   1 larrysmithjr  staff   271 Feb 20 21:31 .travis.yml
-rw-r--r--   1 larrysmithjr  staff   617 Feb 20 21:31 .yamllint
-rw-r--r--   1 larrysmithjr  staff  3356 Feb 20 21:31 CODE_OF_CONDUCT.md
-rw-r--r--   1 larrysmithjr  staff   400 Feb 20 21:31 CONTRIBUTING.md
-rw-r--r--   1 larrysmithjr  staff    40 Feb 20 21:31 CONTRIBUTORS.md
-rw-r--r--   1 larrysmithjr  staff  1072 Feb 20 21:31 LICENSE.md
-rw-r--r--   1 larrysmithjr  staff  1037 Feb 20 21:31 README.md
drwxr-xr-x   3 larrysmithjr  staff    96 Feb 20 21:31 defaults
drwxr-xr-x   3 larrysmithjr  staff    96 Feb 20 21:31 files
drwxr-xr-x   3 larrysmithjr  staff    96 Feb 20 21:31 handlers
drwxr-xr-x   3 larrysmithjr  staff    96 Feb 20 21:31 meta
drwxr-xr-x   5 larrysmithjr  staff   160 Feb 20 21:31 molecule
-rw-r--r--   1 larrysmithjr  staff    87 Feb 20 21:31 playbook.yml
-rw-r--r--   1 larrysmithjr  staff    37 Feb 20 21:31 requirements-dev.txt
-rw-r--r--   1 larrysmithjr  staff    89 Feb 20 21:31 requirements.txt
-rw-r--r--   1 larrysmithjr  staff     0 Feb 20 21:31 requirements.yml
drwxr-xr-x   3 larrysmithjr  staff    96 Feb 20 21:31 tasks
drwxr-xr-x   3 larrysmithjr  staff    96 Feb 20 21:31 templates
drwxr-xr-x   3 larrysmithjr  staff    96 Feb 20 21:31 vars
```

As you can see, there is much more in the new structure than in the original.

Now is where the fun begins :)

Let&apos;s change into the `ansible-control-machine` directory:

```bash
cd ansible-control-machine
```

Now let&apos;s do a quick `git status`:

```bash
git status
...
fatal: not a git repository (or any of the parent directories): .git
```

Oh no! Where is my repo info? The answer is that it simply doesn&apos;t exist yet in the new structure. Let&apos;s see how we can get it back.

We will do that by copying the .git directory from our original project, which brings the full history, branches, and remote configuration into the new directory.

```bash
cp -Rv ../ansible-control-machine.orig/.git .
...
../ansible-control-machine.orig/.git -&amp;gt; ./.git
../ansible-control-machine.orig/.git/config -&amp;gt; ./.git/config
../ansible-control-machine.orig/.git/objects -&amp;gt; ./.git/objects
../ansible-control-machine.orig/.git/objects/pack -&amp;gt; ./.git/objects/pack
../ansible-control-machine.orig/.git/objects/pack/pack-35613ed67c65538902f0322dc253bcc6b19acd31.pack -&amp;gt; ./.git/objects/pack/pack-35613ed67c65538902f0322dc253bcc6b19acd31.pack
../ansible-control-machine.orig/.git/objects/pack/pack-35613ed67c65538902f0322dc253bcc6b19acd31.idx -&amp;gt; ./.git/objects/pack/pack-35613ed67c65538902f0322dc253bcc6b19acd31.idx
../ansible-control-machine.orig/.git/objects/info -&amp;gt; ./.git/objects/info
../ansible-control-machine.orig/.git/HEAD -&amp;gt; ./.git/HEAD
../ansible-control-machine.orig/.git/info -&amp;gt; ./.git/info
../ansible-control-machine.orig/.git/info/exclude -&amp;gt; ./.git/info/exclude
../ansible-control-machine.orig/.git/logs -&amp;gt; ./.git/logs
../ansible-control-machine.orig/.git/logs/HEAD -&amp;gt; ./.git/logs/HEAD
../ansible-control-machine.orig/.git/logs/refs -&amp;gt; ./.git/logs/refs
../ansible-control-machine.orig/.git/logs/refs/heads -&amp;gt; ./.git/logs/refs/heads
../ansible-control-machine.orig/.git/logs/refs/heads/master -&amp;gt; ./.git/logs/refs/heads/master
../ansible-control-machine.orig/.git/logs/refs/remotes -&amp;gt; ./.git/logs/refs/remotes
../ansible-control-machine.orig/.git/logs/refs/remotes/origin -&amp;gt; ./.git/logs/refs/remotes/origin
../ansible-control-machine.orig/.git/logs/refs/remotes/origin/HEAD -&amp;gt; ./.git/logs/refs/remotes/origin/HEAD
../ansible-control-machine.orig/.git/description -&amp;gt; ./.git/description
../ansible-control-machine.orig/.git/hooks -&amp;gt; ./.git/hooks
../ansible-control-machine.orig/.git/hooks/commit-msg.sample -&amp;gt; ./.git/hooks/commit-msg.sample
../ansible-control-machine.orig/.git/hooks/pre-rebase.sample -&amp;gt; ./.git/hooks/pre-rebase.sample
../ansible-control-machine.orig/.git/hooks/pre-commit.sample -&amp;gt; ./.git/hooks/pre-commit.sample
../ansible-control-machine.orig/.git/hooks/applypatch-msg.sample -&amp;gt; ./.git/hooks/applypatch-msg.sample
../ansible-control-machine.orig/.git/hooks/fsmonitor-watchman.sample -&amp;gt; ./.git/hooks/fsmonitor-watchman.sample
../ansible-control-machine.orig/.git/hooks/pre-receive.sample -&amp;gt; ./.git/hooks/pre-receive.sample
../ansible-control-machine.orig/.git/hooks/prepare-commit-msg.sample -&amp;gt; ./.git/hooks/prepare-commit-msg.sample
../ansible-control-machine.orig/.git/hooks/post-update.sample -&amp;gt; ./.git/hooks/post-update.sample
../ansible-control-machine.orig/.git/hooks/pre-merge-commit.sample -&amp;gt; ./.git/hooks/pre-merge-commit.sample
../ansible-control-machine.orig/.git/hooks/pre-applypatch.sample -&amp;gt; ./.git/hooks/pre-applypatch.sample
../ansible-control-machine.orig/.git/hooks/pre-push.sample -&amp;gt; ./.git/hooks/pre-push.sample
../ansible-control-machine.orig/.git/hooks/update.sample -&amp;gt; ./.git/hooks/update.sample
../ansible-control-machine.orig/.git/refs -&amp;gt; ./.git/refs
../ansible-control-machine.orig/.git/refs/heads -&amp;gt; ./.git/refs/heads
../ansible-control-machine.orig/.git/refs/heads/master -&amp;gt; ./.git/refs/heads/master
../ansible-control-machine.orig/.git/refs/tags -&amp;gt; ./.git/refs/tags
../ansible-control-machine.orig/.git/refs/remotes -&amp;gt; ./.git/refs/remotes
../ansible-control-machine.orig/.git/refs/remotes/origin -&amp;gt; ./.git/refs/remotes/origin
../ansible-control-machine.orig/.git/refs/remotes/origin/HEAD -&amp;gt; ./.git/refs/remotes/origin/HEAD
../ansible-control-machine.orig/.git/index -&amp;gt; ./.git/index
../ansible-control-machine.orig/.git/packed-refs -&amp;gt; ./.git/packed-refs
```

Once the copy is complete, we can do another `git status` to see how things look now:

```bash
git status
...
On branch master
Your branch is up to date with &apos;origin/master&apos;.

Changes not staged for commit:
  (use &quot;git add/rm &amp;lt;file&amp;gt;...&quot; to update what will be committed)
  (use &quot;git restore &amp;lt;file&amp;gt;...&quot; to discard changes in working directory)
 modified:   .travis.yml
 deleted:    .yamllint.yml
 modified:   README.md
 deleted:    Vagrant/.gitignore
 deleted:    Vagrant/Vagrantfile
 deleted:    Vagrant/ansible.cfg
 deleted:    Vagrant/bootstrap.sh
 deleted:    Vagrant/bootstrap.yml
 deleted:    Vagrant/cleanup.bat
 deleted:    Vagrant/cleanup.sh
 deleted:    Vagrant/hosts
 deleted:    Vagrant/nodes.yml
 deleted:    Vagrant/playbook.yml
 deleted:    Vagrant/prep.sh
 deleted:    Vagrant/requirements.yml
 deleted:    Vagrant/roles/ansible-control-machine
 modified:   defaults/main.yml
 modified:   meta/main.yml
 deleted:    setup_travis_tests.sh
 deleted:    tasks/debian.yml
 modified:   tasks/main.yml
 deleted:    tasks/redhat.yml
 deleted:    tasks/setup.yml
 deleted:    tests/.ansible-lint
 deleted:    tests/Dockerfile.centos-7
 deleted:    tests/Dockerfile.debian-jessie
 deleted:    tests/Dockerfile.debian-stretch
 deleted:    tests/Dockerfile.fedora-24
 deleted:    tests/Dockerfile.fedora-25
 deleted:    tests/Dockerfile.fedora-26
 deleted:    tests/Dockerfile.fedora-27
 deleted:    tests/Dockerfile.fedora-28
 deleted:    tests/Dockerfile.fedora-29
 deleted:    tests/Dockerfile.ubuntu-bionic
 deleted:    tests/Dockerfile.ubuntu-trusty
 deleted:    tests/Dockerfile.ubuntu-xenial
 deleted:    tests/inventory
 deleted:    tests/test.yml

Untracked files:
  (use &quot;git add &amp;lt;file&amp;gt;...&quot; to include in what will be committed)
 .github/workflows/default.yml
 .gitignore
 .gitlab-ci.yml
 .yamllint
 CODE_OF_CONDUCT.md
 CONTRIBUTING.md
 CONTRIBUTORS.md
 LICENSE.md
 files/.gitkeep
 molecule/default/Dockerfile.j2
 molecule/default/INSTALL.rst
 molecule/default/molecule.yml
 molecule/shared/converge.yml
 molecule/shared/tests/test_default.py
 molecule/vagrant/INSTALL.rst
 molecule/vagrant/molecule.yml
 molecule/vagrant/prepare.yml
 playbook.yml
 requirements-dev.txt
 requirements.txt
 requirements.yml
 templates/.gitkeep

no changes added to commit (use &quot;git add&quot; and/or &quot;git commit -a&quot;)
```

Well, that looks better, but scary, right? Not at all! All we need to do now is decide what we want to keep and what we want to get rid of. But before we do that, let&apos;s create a new branch so we are not working directly on `master`. That way, nothing in `master` is at risk if we do something wrong.

So, let&apos;s create a new branch called `updating-structure`:

```bash
git checkout -b updating-structure
...
Switched to a new branch &apos;updating-structure&apos;
```
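
As a side note, on Git 2.23+ there is also `git switch -c`, which does the same thing with a clearer name. A quick sketch in a throwaway repo:

```bash
# Sketch: git switch -c is the modern equivalent of git checkout -b
# (requires git >= 2.23). Throwaway demo repo, hypothetical identity.
cd $(mktemp -d)
git init -q
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m init
git switch -c updating-structure
git branch --show-current   # prints: updating-structure
```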

Now that we are in our `updating-structure` branch, we can start by checking out anything marked as deleted that we want to keep.

```bash
git status | grep deleted
...
 deleted:    .yamllint.yml
 deleted:    Vagrant/.gitignore
 deleted:    Vagrant/Vagrantfile
 deleted:    Vagrant/ansible.cfg
 deleted:    Vagrant/bootstrap.sh
 deleted:    Vagrant/bootstrap.yml
 deleted:    Vagrant/cleanup.bat
 deleted:    Vagrant/cleanup.sh
 deleted:    Vagrant/hosts
 deleted:    Vagrant/nodes.yml
 deleted:    Vagrant/playbook.yml
 deleted:    Vagrant/prep.sh
 deleted:    Vagrant/requirements.yml
 deleted:    Vagrant/roles/ansible-control-machine
 deleted:    setup_travis_tests.sh
 deleted:    tasks/debian.yml
 deleted:    tasks/redhat.yml
 deleted:    tasks/setup.yml
 deleted:    tests/.ansible-lint
 deleted:    tests/Dockerfile.centos-7
 deleted:    tests/Dockerfile.debian-jessie
 deleted:    tests/Dockerfile.debian-stretch
 deleted:    tests/Dockerfile.fedora-24
 deleted:    tests/Dockerfile.fedora-25
 deleted:    tests/Dockerfile.fedora-26
 deleted:    tests/Dockerfile.fedora-27
 deleted:    tests/Dockerfile.fedora-28
 deleted:    tests/Dockerfile.fedora-29
 deleted:    tests/Dockerfile.ubuntu-bionic
 deleted:    tests/Dockerfile.ubuntu-trusty
 deleted:    tests/Dockerfile.ubuntu-xenial
 deleted:    tests/inventory
 deleted:    tests/test.yml
```

Because this is an Ansible role, I want to keep everything under `tasks` from the list above. So I&apos;ll `checkout` those files to keep them.

```bash
git checkout tasks
...
Updated 4 paths from the index
```

Let&apos;s check our `deleted` files once more to make sure we are good:

```bash
git status | grep deleted
...
 deleted:    .yamllint.yml
 deleted:    Vagrant/.gitignore
 deleted:    Vagrant/Vagrantfile
 deleted:    Vagrant/ansible.cfg
 deleted:    Vagrant/bootstrap.sh
 deleted:    Vagrant/bootstrap.yml
 deleted:    Vagrant/cleanup.bat
 deleted:    Vagrant/cleanup.sh
 deleted:    Vagrant/hosts
 deleted:    Vagrant/nodes.yml
 deleted:    Vagrant/playbook.yml
 deleted:    Vagrant/prep.sh
 deleted:    Vagrant/requirements.yml
 deleted:    Vagrant/roles/ansible-control-machine
 deleted:    setup_travis_tests.sh
 deleted:    tests/.ansible-lint
 deleted:    tests/Dockerfile.centos-7
 deleted:    tests/Dockerfile.debian-jessie
 deleted:    tests/Dockerfile.debian-stretch
 deleted:    tests/Dockerfile.fedora-24
 deleted:    tests/Dockerfile.fedora-25
 deleted:    tests/Dockerfile.fedora-26
 deleted:    tests/Dockerfile.fedora-27
 deleted:    tests/Dockerfile.fedora-28
 deleted:    tests/Dockerfile.fedora-29
 deleted:    tests/Dockerfile.ubuntu-bionic
 deleted:    tests/Dockerfile.ubuntu-trusty
 deleted:    tests/Dockerfile.ubuntu-xenial
 deleted:    tests/inventory
 deleted:    tests/test.yml
```

It looks good. So now I will get rid of the `Vagrant` and `tests` directories, because I know they were only used for testing and I&apos;ll be replacing them with the new Molecule tests.

```bash
git add Vagrant/ tests/
```

And another quick `git status` shows:

```bash
On branch updating-structure
Changes to be committed:
  (use &quot;git restore --staged &amp;lt;file&amp;gt;...&quot; to unstage)
 deleted:    Vagrant/.gitignore
 deleted:    Vagrant/Vagrantfile
 deleted:    Vagrant/ansible.cfg
 deleted:    Vagrant/bootstrap.sh
 deleted:    Vagrant/bootstrap.yml
 deleted:    Vagrant/cleanup.bat
 deleted:    Vagrant/cleanup.sh
 deleted:    Vagrant/hosts
 deleted:    Vagrant/nodes.yml
 deleted:    Vagrant/playbook.yml
 deleted:    Vagrant/prep.sh
 deleted:    Vagrant/requirements.yml
 deleted:    Vagrant/roles/ansible-control-machine
 deleted:    tests/.ansible-lint
 deleted:    tests/Dockerfile.centos-7
 deleted:    tests/Dockerfile.debian-jessie
 deleted:    tests/Dockerfile.debian-stretch
 deleted:    tests/Dockerfile.fedora-24
 deleted:    tests/Dockerfile.fedora-25
 deleted:    tests/Dockerfile.fedora-26
 deleted:    tests/Dockerfile.fedora-27
 deleted:    tests/Dockerfile.fedora-28
 deleted:    tests/Dockerfile.fedora-29
 deleted:    tests/Dockerfile.ubuntu-bionic
 deleted:    tests/Dockerfile.ubuntu-trusty
 deleted:    tests/Dockerfile.ubuntu-xenial
 deleted:    tests/inventory
 deleted:    tests/test.yml

Changes not staged for commit:
  (use &quot;git add/rm &amp;lt;file&amp;gt;...&quot; to update what will be committed)
  (use &quot;git restore &amp;lt;file&amp;gt;...&quot; to discard changes in working directory)
 modified:   .travis.yml
 deleted:    .yamllint.yml
 modified:   README.md
 modified:   defaults/main.yml
 modified:   meta/main.yml
 deleted:    setup_travis_tests.sh

Untracked files:
  (use &quot;git add &amp;lt;file&amp;gt;...&quot; to include in what will be committed)
 .github/workflows/default.yml
 .gitignore
 .gitlab-ci.yml
 .yamllint
 CODE_OF_CONDUCT.md
 CONTRIBUTING.md
 CONTRIBUTORS.md
 LICENSE.md
 files/.gitkeep
 molecule/default/Dockerfile.j2
 molecule/default/INSTALL.rst
 molecule/default/molecule.yml
 molecule/shared/converge.yml
 molecule/shared/tests/test_default.py
 molecule/vagrant/INSTALL.rst
 molecule/vagrant/molecule.yml
 molecule/vagrant/prepare.yml
 playbook.yml
 requirements-dev.txt
 requirements.txt
 requirements.yml
 templates/.gitkeep
```

I&apos;m happy with this, so I&apos;ll now commit those changes:

```bash
git commit -m &quot;Deleted old tests, etc. not needed&quot;
...
[updating-structure a117fcf] Deleted old tests, etc. not needed
 28 files changed, 1202 deletions(-)
 delete mode 100644 Vagrant/.gitignore
 delete mode 100644 Vagrant/Vagrantfile
 delete mode 100644 Vagrant/ansible.cfg
 delete mode 100755 Vagrant/bootstrap.sh
 delete mode 100644 Vagrant/bootstrap.yml
 delete mode 100644 Vagrant/cleanup.bat
 delete mode 100755 Vagrant/cleanup.sh
 delete mode 120000 Vagrant/hosts
 delete mode 100644 Vagrant/nodes.yml
 delete mode 100644 Vagrant/playbook.yml
 delete mode 100755 Vagrant/prep.sh
 delete mode 100644 Vagrant/requirements.yml
 delete mode 120000 Vagrant/roles/ansible-control-machine
 delete mode 100644 tests/.ansible-lint
 delete mode 100644 tests/Dockerfile.centos-7
 delete mode 100644 tests/Dockerfile.debian-jessie
 delete mode 100644 tests/Dockerfile.debian-stretch
 delete mode 100644 tests/Dockerfile.fedora-24
 delete mode 100644 tests/Dockerfile.fedora-25
 delete mode 100644 tests/Dockerfile.fedora-26
 delete mode 100644 tests/Dockerfile.fedora-27
 delete mode 100644 tests/Dockerfile.fedora-28
 delete mode 100644 tests/Dockerfile.fedora-29
 delete mode 100644 tests/Dockerfile.ubuntu-bionic
 delete mode 100644 tests/Dockerfile.ubuntu-trusty
 delete mode 100644 tests/Dockerfile.ubuntu-xenial
 delete mode 100644 tests/inventory
 delete mode 100644 tests/test.yml
```

Now I can start reviewing the changes to the files marked as `modified`. First, I&apos;ll check which files those are:

```bash
git status | grep modified
...
 modified:   .travis.yml
 modified:   README.md
 modified:   defaults/main.yml
 modified:   meta/main.yml
```
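
If you prefer staying on the command line for this review, `git diff` plus `git checkout -- &amp;lt;file&amp;gt;` covers the same keep-or-discard decision. A small hypothetical sketch:

```bash
# Sketch: reviewing a modification, then discarding it from the CLI.
# Throwaway repo; README.md stands in for any modified file.
cd $(mktemp -d)
git init -q
git config user.email demo@example.com
git config user.name Demo
echo original &amp;gt; README.md
git add .
git commit -qm init
echo replacement &amp;gt; README.md
git diff --stat                # shows README.md as changed
git checkout -q -- README.md   # decide to discard the change
cat README.md                  # prints: original
```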

As we can see, only four files have been modified. I&apos;ll take my time and go through each of them in whatever editor I choose to decide what to keep and what to discard. I personally use VSCode for this, as it makes it easy to keep or discard individual modifications.

Once I am done with the modified files, I&apos;ll add and commit those as well.

```bash
git status
...
On branch updating-structure
Changes to be committed:
  (use &quot;git restore --staged &amp;lt;file&amp;gt;...&quot; to unstage)
 modified:   .travis.yml
 modified:   README.md
 modified:   meta/main.yml

Untracked files:
  (use &quot;git add &amp;lt;file&amp;gt;...&quot; to include in what will be committed)
 .github/workflows/default.yml
 .gitignore
 .gitlab-ci.yml
 .yamllint
 CODE_OF_CONDUCT.md
 CONTRIBUTING.md
 CONTRIBUTORS.md
 LICENSE.md
 files/.gitkeep
 molecule/default/Dockerfile.j2
 molecule/default/INSTALL.rst
 molecule/default/molecule.yml
 molecule/shared/converge.yml
 molecule/shared/tests/test_default.py
 molecule/vagrant/INSTALL.rst
 molecule/vagrant/molecule.yml
 molecule/vagrant/prepare.yml
 playbook.yml
 requirements-dev.txt
 requirements.txt
 requirements.yml
 templates/.gitkeep
```

```bash
git commit -m &quot;Updated files, etc. after new structure&quot;
...
 3 files changed, 58 insertions(+), 173 deletions(-)
 rewrite .travis.yml (93%)
 rewrite README.md (87%)
```

The final step is to add the remaining `untracked files`. These are new files that are part of my desired structure, so I&apos;ll want them committed as well.

We can add all `untracked files` by:

```bash
git add .
```

```bash
git status
...
On branch updating-structure
Changes to be committed:
  (use &quot;git restore --staged &amp;lt;file&amp;gt;...&quot; to unstage)
 new file:   .github/workflows/default.yml
 new file:   .gitignore
 new file:   .gitlab-ci.yml
 new file:   .yamllint
 new file:   CODE_OF_CONDUCT.md
 new file:   CONTRIBUTING.md
 new file:   CONTRIBUTORS.md
 new file:   LICENSE.md
 new file:   files/.gitkeep
 new file:   molecule/default/Dockerfile.j2
 new file:   molecule/default/INSTALL.rst
 new file:   molecule/default/molecule.yml
 new file:   molecule/shared/converge.yml
 new file:   molecule/shared/tests/test_default.py
 new file:   molecule/vagrant/INSTALL.rst
 new file:   molecule/vagrant/molecule.yml
 new file:   molecule/vagrant/prepare.yml
 new file:   playbook.yml
 new file:   requirements-dev.txt
 new file:   requirements.txt
 new file:   requirements.yml
 new file:   templates/.gitkeep
```

Now, we can commit these as well:

```bash
git commit -m &quot;New files, etc. from new structure&quot;
...
[updating-structure 9b5a5f2] New files, etc. from new structure
 22 files changed, 431 insertions(+)
 create mode 100644 .github/workflows/default.yml
 create mode 100644 .gitignore
 create mode 100644 .gitlab-ci.yml
 create mode 100644 .yamllint
 create mode 100644 CODE_OF_CONDUCT.md
 create mode 100644 CONTRIBUTING.md
 create mode 100644 CONTRIBUTORS.md
 create mode 100644 LICENSE.md
 create mode 100644 files/.gitkeep
 create mode 100644 molecule/default/Dockerfile.j2
 create mode 100644 molecule/default/INSTALL.rst
 create mode 100644 molecule/default/molecule.yml
 create mode 100644 molecule/shared/converge.yml
 create mode 100644 molecule/shared/tests/test_default.py
 create mode 100644 molecule/vagrant/INSTALL.rst
 create mode 100644 molecule/vagrant/molecule.yml
 create mode 100644 molecule/vagrant/prepare.yml
 create mode 100644 playbook.yml
 create mode 100644 requirements-dev.txt
 create mode 100644 requirements.txt
 create mode 100644 requirements.yml
 create mode 100644 templates/.gitkeep
```

With that, my project now has the new desired structure, and I haven&apos;t lost anything other than what I intended to remove. The changes can now be pushed up to the `updating-structure` branch.

Let&apos;s ensure our `git remote` is still in place before doing so:

```bash
git remote -v
...
origin git@github.com:mrlesmithjr/ansible-control-machine.git (fetch)
origin git@github.com:mrlesmithjr/ansible-control-machine.git (push)
```
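
One note: if `git push` complains that the new branch has no upstream, `git push -u origin updating-structure` pushes and records the upstream in one go. Here is a sketch using a local bare repository as a stand-in remote:

```bash
# Sketch: pushing a brand-new branch and recording its upstream with -u.
# A local bare repo stands in for GitHub; everything here is hypothetical.
base=$(mktemp -d)
git init -q --bare $base/origin.git
git init -q $base/work
cd $base/work
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m init
git remote add origin $base/origin.git
git checkout -qb updating-structure
git push -qu origin updating-structure
git rev-parse --abbrev-ref @{u}   # prints: origin/updating-structure
```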

Awesome! So we are good to go and can now push them up.

```bash
git push
...
Enumerating objects: 43, done.
Counting objects: 100% (43/43), done.
Delta compression using up to 8 threads
Compressing objects: 100% (29/29), done.
Writing objects: 100% (38/38), 8.15 KiB | 2.72 MiB/s, done.
Total 38 (delta 4), reused 0 (delta 0)
remote: Resolving deltas: 100% (4/4), completed with 2 local objects.
remote:
remote: Create a pull request for &apos;updating-structure&apos; on GitHub by visiting:
remote:      https://github.com/mrlesmithjr/ansible-control-machine/pull/new/updating-structure
remote:
To github.com:mrlesmithjr/ansible-control-machine.git
 * [new branch]      updating-structure -&amp;gt; updating-structure
```
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once that is complete, you can start testing and resolving any issues surfaced by your CI tests, if you have them enabled. In my case, I pushed to GitHub, where a GitHub Actions workflow should kick off. Remember, this is something I included in the Cookiecutter template.&lt;/p&gt;

&lt;p&gt;Finally, once you are happy with the state of your new updating-structure branch, you can create a Pull Request to merge the changes into your master branch. I want to stress once again that we have not touched our master branch at all, so it will remain with its original structure until the Pull Request is merged.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This is an excellent exercise to get the new structure in place. However, initially, it can be daunting, as you may be worried about causing issues. But if you follow these steps, you should be just fine! So, there you have it. I’d love to hear from others on how they go about these types of scenarios and your experiences, so feel free to leave feedback.&lt;/p&gt;

&lt;p&gt;Enjoy!&lt;/p&gt;</content><author><name>Larry Smith Jr.</name></author><summary type="html">Introduction</summary></entry><entry><title type="html">Manager or Leader</title><link href="https://everythingshouldbevirtual.com/Manager-or-Leader/" rel="alternate" type="text/html" title="Manager or Leader" /><published>2019-11-19T07:45:00-05:00</published><updated>2019-11-19T07:45:00-05:00</updated><id>https://everythingshouldbevirtual.com/Manager-or-Leader</id><content type="html" xml:base="https://everythingshouldbevirtual.com/Manager-or-Leader/">&lt;p&gt;&lt;strong&gt;Are you a manager or leader?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Say what?&lt;/p&gt;

&lt;p&gt;This subject has been on my mind for many, many years. Some may argue that these roles are the same, but they are not. You cannot have one without the other, but they should not be viewed as the same. Some of what I am about to share is purely from my perspective and may not be received well by everyone. Offense is not my intent; this is all about sparking conversation and making people think.&lt;/p&gt;

&lt;p&gt;What inspired me to write this? Many different things over the past few years, but most recently it was this Twitter post that pushed me to put it into words.&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot; data-partner=&quot;tweetdeck&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;IMHO, a true leader should only be visible within the team you are leading. Outside of your team however, the leader should not easily be recognized as your team should be empowered and on the same page. If the leader were to leave, that team should pick right up!&lt;/p&gt;&amp;mdash; Larry Smith Jr. (@mrlesmithjr) &lt;a href=&quot;https://twitter.com/mrlesmithjr/status/1196822184783732736?ref_src=twsrc%5Etfw&quot;&gt;November 19, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async=&quot;&quot; src=&quot;https://platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;

&lt;h2 id=&quot;background&quot;&gt;Background&lt;/h2&gt;

&lt;p&gt;I was raised in a military family, as many of us were. My father taught me things that were paramount in shaping me somewhat differently than most. He taught me to lead by example (as most fathers and mothers do), but he also taught me that you do not need to be in the spotlight to be a true leader: treat all people equally, and never put yourself above others. I remember a story he once told me about being promoted to foreman (at General Motors). He initially thought it would be great, but soon discovered it was not, because his superiors kept trying to instill in him that he was now one of them and should treat his former peers differently than he had before. After about a week, he refused, said he no longer wanted to be foreman, and went back to being with his people. My father was a leader to the utmost degree until he passed away almost five years ago. In my eyes, he is still a leader.&lt;/p&gt;

&lt;p&gt;Another thing he taught me was to respect titles but to take them with a grain of salt. A title should never dictate a conversation or sway your decision(s). That has stuck with me throughout my life.&lt;/p&gt;

&lt;h2 id=&quot;my-career&quot;&gt;My Career&lt;/h2&gt;

&lt;p&gt;Throughout my career, I have been approached many times over the past ten years or so about taking on a management role. Anyone who knows me, or has been part of one of those discussions, already knows my answer.&lt;/p&gt;

&lt;p&gt;“No way in hell, not for me.”&lt;/p&gt;

&lt;p&gt;So, in most cases, it was agreed that I would function in a mentor/advisory role. I have spent many years mentoring others to be empowered and to have a voice, and I have worked equally in this capacity with my immediate team(s) and management team(s). I am notorious for having chat sessions with others during meetings, guiding them in what they might want to say or do based on the context of the discussion. I usually had the answer myself, so why would I not be the one to speak up, give the answer, and be the leader? The answer is simple: empowering those around you to make their voices heard builds their confidence. It also means I am learning from others, because they may interpret or deliver the message differently than I would have. I feel successful when I see others succeed.&lt;/p&gt;

&lt;p&gt;This could be interpreted as the role of a manager. I disagree, and now I will digress.&lt;/p&gt;

&lt;h2 id=&quot;manager-roles&quot;&gt;Manager Roles&lt;/h2&gt;

&lt;p&gt;What does a manager role mean to me?&lt;/p&gt;

&lt;p&gt;Work with me here, and please do not take offense, as none is intended.&lt;/p&gt;

&lt;p&gt;A manager should be the go-to person and the face of the team they represent. They do not need to be technical, and I prefer that they not be. They need to have their team’s back in its time of need and be the ones who step into the spotlight. I view a manager as a dictator (not in the bad sense): they dictate what is being asked of the team and are ultimately accountable for things getting done.&lt;/p&gt;

&lt;p&gt;In most cases, they may or may not know what it takes to deliver. They should also be open to the fact that their team may disagree with them, and let the team decide the right course of action. I learned many years ago that your job is to make your manager look good! In practice, that means the guidance you give your manager should be in everyone’s best interest.&lt;/p&gt;

&lt;p&gt;Remember the part where I mentioned that titles should not dictate your conversations or decisions? This is where it becomes essential. It is not about disrespecting your manager, but about putting everyone on a level playing field. A good manager will respect and allow this, understanding that it is done with their best interests in mind and to make them look good! Remember the part about feeling successful by seeing others succeed? This is part of that.&lt;/p&gt;

&lt;h2 id=&quot;leader-roles&quot;&gt;Leader Roles&lt;/h2&gt;

&lt;p&gt;What does a leadership role mean to me?&lt;/p&gt;

&lt;p&gt;A leader leads by example and does not feel the need to stand out from their peers. A leader is not necessarily the go-to person for the direction things are moving, nor the face of their team. They are, however, the ones who inspire and empower others to do great things. A leader may be recognized by their immediate peers, but outside the team they should simply be viewed as part of it. Remember the Twitter post I linked above? A leader should not stand out externally to their team. A true leader brings those around them along, and the whole team delivers the same message. Their manager may recognize who the leader is, but outside of that, no one should know. A true leader should never feel they belong in the spotlight; they should want their whole team in the spotlight. Often, true leaders do not even recognize that they are one. If you do realize it, be humble, and never use it to take the upper hand. Use it in a way that empowers those around you.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;I am very passionate about this topic. Am I right in my opinions here? Probably not, as they are just that: my opinions. However, these are great topics for discussion.&lt;/p&gt;

&lt;p&gt;I also put this tweet out a week or so ago, and I truly meant it! I received numerous amazing messages from folks. If you are reading this post, feel free to reach out!&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot; data-partner=&quot;tweetdeck&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;I&amp;#39;m at that point that my day to day work I do is no longer enjoyable without being more of a mentor. Problem is I need more willing mentees available. If you or someone you know is interested, reach out to me.&lt;/p&gt;&amp;mdash; Larry Smith Jr. (@mrlesmithjr) &lt;a href=&quot;https://twitter.com/mrlesmithjr/status/1194025732017704967?ref_src=twsrc%5Etfw&quot;&gt;November 11, 2019&lt;/a&gt;&lt;/blockquote&gt;
&lt;script async=&quot;&quot; src=&quot;https://platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;

&lt;p&gt;Enjoy!&lt;/p&gt;</content><author><name>Larry Smith Jr.</name></author><summary type="html">Are you a manager or leader?</summary></entry><entry><title type="html">CFD6 - Hashicorp</title><link href="https://everythingshouldbevirtual.com/CFD6-Hashicorp/" rel="alternate" type="text/html" title="CFD6 - Hashicorp" /><published>2019-10-13T16:28:00-04:00</published><updated>2019-10-13T16:28:00-04:00</updated><id>https://everythingshouldbevirtual.com/CFD6-Hashicorp</id><content type="html" xml:base="https://everythingshouldbevirtual.com/CFD6-Hashicorp/">&lt;p&gt;Recently while attending Cloud Field Day 6, one of the companies presenting just so happened to be Hashicorp. Now Hashicorp is one of my
personal favorite companies in the open-source world, so to say that I
was extremely excited to hear from them would be an understatement. Luckily
for us, Mitchell Hashimoto was the one who presented to us delegates, and
I am sure everyone was as excited as I was.&lt;/p&gt;

&lt;p&gt;Hashicorp spent a bit of time highlighting several of their products, but the main focus of the presentation was Consul. Why Consul? Because, as Mitchell mentioned, networking has a huge bullseye on its back and the cloud is coming for
you! :) This is where Consul comes into play.&lt;/p&gt;

&lt;!-- Courtesy of embedresponsively.com //--&gt;

&lt;div class=&quot;responsive-video-container&quot;&gt;
    &lt;iframe src=&quot;https://www.youtube-nocookie.com/embed/VML6w2Vj9Ws&quot; frameborder=&quot;0&quot; webkitallowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
  &lt;/div&gt;

&lt;h2 id=&quot;consul-for-service-networking&quot;&gt;Consul For Service Networking&lt;/h2&gt;

&lt;!-- Courtesy of embedresponsively.com //--&gt;

&lt;div class=&quot;responsive-video-container&quot;&gt;
    &lt;iframe src=&quot;https://www.youtube-nocookie.com/embed/TWthJXrDiis&quot; frameborder=&quot;0&quot; webkitallowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
  &lt;/div&gt;

&lt;h3 id=&quot;multi-cloud-multiple-technologies---call-me-when-you-have-a-good-idea&quot;&gt;Multi-Cloud (Multiple Technologies) - Call Me When You Have a Good Idea&lt;/h3&gt;

&lt;p&gt;At the beginning of this segment, Mitchell talks about a multi-cloud conversation
he had with an analyst a few years ago, in which the analyst told him to call back
when he had a good idea. Ummmm…someone &lt;strong&gt;SHOULD&lt;/strong&gt; have listened to him!&lt;/p&gt;

&lt;p&gt;As Mitchell explains in regard to the diagram below, the mix of technologies shown
is reality, and because they all speak TCP, they can all co-exist. Makes sense, obviously.
&lt;img src=&quot;../../images/2019/10/Consul-Mixed-Technologies.png&quot; alt=&quot;Mixed Technologies&quot; /&gt;&lt;/p&gt;

&lt;p&gt;And through organic growth in a data center, things begin to get complex, as seen
in the diagram below.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../../images/2019/10/Consul-Modern-Datacenter-Challenges.png&quot; alt=&quot;Modern Datacenter Challenges&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;what-is-consul&quot;&gt;What Is Consul&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;../../images/2019/10/Consul-What-Is-It.png&quot; alt=&quot;What Is Consul?&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Hashicorp follows the crawl, walk, run strategy with Consul, as with all of their
other products. For example, a service mesh seems like a big-bang effort, but it is
not needed initially. Service mesh functionality was added to Consul last year, whereas
the capabilities in the top two bullets have been in place for four to five years. I have
leveraged Consul for those top two bullets across large-scale data centers. From an
architecture perspective, the diagram below outlines a multi-data center Consul
architecture. The open-source version requires full network connectivity between
data centers, whereas the paid version can function in a hub-and-spoke model.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../../images/2019/10/Consul-Multi-DC-Architecture.png&quot; alt=&quot;Multi-Datacenter Architecture&quot; /&gt;&lt;/p&gt;
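&lt;p&gt;To make the multi-data center piece a bit more concrete, here is a minimal sketch of federating two Consul data centers over the WAN gossip pool in the open-source version. The hostname is a hypothetical placeholder, and the commands assume the consul CLI is installed, so they degrade gracefully when it is not.&lt;/p&gt;

```shell
# Federate two Consul datacenters over the WAN gossip pool.
# "consul-server.dc2.example.com" is a hypothetical placeholder.
if command -v consul >/dev/null 2>&1; then
  # From a server node in dc1, join a server node in dc2
  consul join -wan consul-server.dc2.example.com

  # Verify that server nodes from both datacenters show up
  consul members -wan
else
  echo "consul CLI not installed; commands shown for reference only"
fi
```

&lt;p&gt;The hub-and-spoke topology mentioned above for the paid version removes the need for this full mesh connectivity between every pair of data centers.&lt;/p&gt;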

&lt;p&gt;Consul does require an agent on hosts to handle membership, etc. In instances where
running the agent is not an option, services can be registered via the Consul API as
external resources. When running on a container platform (Kubernetes, Docker Swarm,
etc.), the agent runs on the container host, not within the containers. The real
benefit for service resiliency does require an agent to ensure the health of all nodes
and services. The agent is very lightweight: a single binary written in Go. The same
binary serves as both Consul server and Consul client; the arguments passed determine
the mode in which the agent executes. You can find more on the Consul agent &lt;a href=&quot;https://www.consul.io/docs/agent/basics.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
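&lt;p&gt;For the case where an agent is not an option, here is a rough sketch of registering an external service through Consul’s catalog API. The node name, address, and service details are hypothetical, and the final call assumes a local agent listening on port 8500, so it is guarded to fail gracefully.&lt;/p&gt;

```shell
# Hypothetical external service (e.g. a database that cannot run the agent)
cat > external-svc.json <<'EOF'
{
  "Node": "legacy-db-host",
  "Address": "10.0.0.50",
  "Service": {
    "Service": "legacy-db",
    "Port": 5432
  }
}
EOF

# Sanity-check the payload locally
python3 -m json.tool external-svc.json >/dev/null && echo "payload OK"

# Register it via the catalog API (requires a local Consul agent on :8500)
command -v curl >/dev/null 2>&1 \
  && curl -s -X PUT --data @external-svc.json \
       http://127.0.0.1:8500/v1/catalog/register \
  || echo "no local Consul agent reachable; skipping registration"
```

&lt;p&gt;A service registered this way shows up in the catalog like any other, but, as noted above, without an agent on the host it does not get the same health-checking benefits.&lt;/p&gt;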

&lt;h2 id=&quot;consul-connect&quot;&gt;Consul Connect&lt;/h2&gt;

&lt;!-- Courtesy of embedresponsively.com //--&gt;

&lt;div class=&quot;responsive-video-container&quot;&gt;
    &lt;iframe src=&quot;https://www.youtube-nocookie.com/embed/JQqtoWF-0pg&quot; frameborder=&quot;0&quot; webkitallowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
  &lt;/div&gt;

&lt;p&gt;What they really wanted to cover, though, was what they are now calling
Consul Connect. Consul Connect provides secure communication between services,
with TLS encryption and identity-based authorization that works anywhere (VMs,
containers, edge switches, etc.).&lt;/p&gt;

&lt;h3 id=&quot;consul-connect---instead-of-traditional-firewall-access-methods&quot;&gt;Consul Connect - Instead of Traditional Firewall Access Methods&lt;/h3&gt;

&lt;p&gt;One problem that Consul Connect addresses is that traditional firewalls use a
static IP address as the identity to which access is granted. With Consul Connect,
you can instead use a service identity within Consul to control access to
services.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../../images/2019/10/Consul-Firewall-Reference.png&quot; alt=&quot;Consul Firewall Reference&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;consul-connect---service-access-graph&quot;&gt;Consul Connect - Service Access Graph&lt;/h3&gt;

&lt;p&gt;Because all services are known to Consul, service access graphs can be created
to grant service-level access rather than IP-based access. These graphs can be
defined before any services are deployed, which makes the approval process much
simpler and means the rules are already in place by the time the services come
online.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../../images/2019/10/Consul-Service-Graph-Intentions.png&quot; alt=&quot;Consul Service Graph Intentions&quot; /&gt;&lt;/p&gt;
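&lt;p&gt;As a rough sketch of what defining that access graph looks like in practice, Consul expresses these rules as “intentions.” The service names below are hypothetical, and, as noted above, the intentions can be created before either service is ever deployed. The commands assume the consul CLI and a running agent, so they are guarded.&lt;/p&gt;

```shell
# Express a service access graph with Connect intentions.
# "web" and "db" are hypothetical service names.
if command -v consul >/dev/null 2>&1; then
  consul intention create -deny '*' '*'   # deny everything by default
  consul intention create -allow web db   # then explicitly allow web -> db
  consul intention check web db           # should report the connection as allowed
else
  echo "consul CLI not installed; commands shown for reference only"
fi
```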

&lt;p&gt;Mitchell also mentioned that Hashicorp made an announcement around &lt;a href=&quot;https://www.hashicorp.com/blog/hashicorp-consul-enterprise-supports-vmware-nsx-service-mesh-federation&quot;&gt;VMware NSX Service Mesh Federation&lt;/a&gt;. I’d
definitely recommend reading about this functionality.&lt;/p&gt;

&lt;h3 id=&quot;consul-connect---certificate-authority&quot;&gt;Consul Connect - Certificate Authority&lt;/h3&gt;

&lt;p&gt;Consul Connect implements its own certificate authority, which can also leverage
Hashicorp Vault. As Mitchell explains, the built-in CA makes it easy to get going,
but realistically most organizations will leverage an external CA for certificates,
which is why the CA system within Consul Connect is pluggable.&lt;/p&gt;
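&lt;p&gt;As an illustration of that pluggability, here is a sketch of pointing the Connect CA at Vault instead of the built-in provider. The Vault address, token, and PKI mount paths are hypothetical placeholders, and applying the configuration requires a running cluster, so that step is guarded.&lt;/p&gt;

```shell
# Hypothetical CA configuration swapping the built-in provider for Vault
cat > ca-config.json <<'EOF'
{
  "Provider": "vault",
  "Config": {
    "Address": "https://vault.example.com:8200",
    "Token": "REPLACE_WITH_VAULT_TOKEN",
    "RootPKIPath": "connect-root",
    "IntermediatePKIPath": "connect-intermediate"
  }
}
EOF

# Sanity-check the JSON locally
python3 -m json.tool ca-config.json >/dev/null && echo "config OK"

# Apply it to a running cluster
command -v consul >/dev/null 2>&1 \
  && consul connect ca set-config -config-file=ca-config.json \
  || echo "consul CLI not installed; command shown for reference only"
```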

&lt;h3 id=&quot;consul-connect---pluggable-data-plane&quot;&gt;Consul Connect - Pluggable Data Plane&lt;/h3&gt;

&lt;p&gt;Consul Connect is a control-plane solution (with a pluggable API) whose data plane currently integrates with
&lt;a href=&quot;https://www.envoyproxy.io/&quot;&gt;Envoy&lt;/a&gt; and &lt;a href=&quot;https://www.haproxy.com/products/haproxy-kubernetes-ingress-controller/&quot;&gt;HAProxy&lt;/a&gt;. More vendor support is underway.&lt;/p&gt;

&lt;h3 id=&quot;consul-connect---mesh-gateways&quot;&gt;Consul Connect - Mesh Gateways&lt;/h3&gt;

&lt;p&gt;Based on a question from the group about controlling access to services
between cloud and on-prem, Mitchell discussed Consul Connect mesh gateways. This
was one topic that I did not grasp immediately and will definitely need to
review more deeply. However, if you are interested, you can read more about
it &lt;a href=&quot;https://www.consul.io/docs/connect/mesh_gateway.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../../images/2019/10/Consul-Mesh-Gateways.png&quot; alt=&quot;Consul Mesh Gateways&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;As I mentioned earlier, a few years ago I worked on a large project in which
we ran Consul in each data center with WAN replication between them. We had a
tremendous amount of success with Consul, and with the amount of functionality
added recently, there is so much more you can do; the new features really
shine.&lt;/p&gt;

&lt;p&gt;With this all being said, I guess it is time for me to freshen up &lt;a href=&quot;https://github.com/mrlesmithjr/vagrant-vault-consul-docker-monitoring&quot;&gt;this&lt;/a&gt; project I was
working on quite some time ago!&lt;/p&gt;

&lt;p&gt;As always, enjoy!&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;DISCLAIMER: I have been invited to Cloud Field Day Exclusive by Gestalt IT who paid for travel, hotel, meals and transportation. I did not receive any compensation to attend CFD and I am under no obligation whatsoever to write any content related to CFD. The contents of these blog posts represent my personal opinions about the products and solutions presented during CFD.&lt;/p&gt;
&lt;/blockquote&gt;</content><author><name>Larry Smith Jr.</name></author><summary type="html">Recently while attending Cloud Field Day 6, one of the companies presenting just so happened to be Hashicorp. Now Hashicorp is one of my personal favorite companies in the open-source world. So, to say that I was extremely excited to hear them would be an understatement. Luckily for us, Mitchell Hashimoto was the one who presented to all of us delegates as I am sure everyone was excited about this.</summary></entry><entry><title type="html">CFD6 VMware API Questions</title><link href="https://everythingshouldbevirtual.com/automation/CFD6-VMware-API-Questions/" rel="alternate" type="text/html" title="CFD6 VMware API Questions" /><published>2019-10-02T20:28:00-04:00</published><updated>2019-10-02T20:28:00-04:00</updated><id>https://everythingshouldbevirtual.com/automation/CFD6-VMware-API-Questions</id><content type="html" xml:base="https://everythingshouldbevirtual.com/automation/CFD6-VMware-API-Questions/">&lt;h2 id=&quot;cfd6-vmware-api-questions&quot;&gt;CFD6 VMware API Questions&lt;/h2&gt;

&lt;p&gt;While attending &lt;a href=&quot;https://techfieldday.com/event/cfd6/&quot;&gt;Cloud Field Day 6&lt;/a&gt; in
Silicon Valley as a delegate for Tech Field Day, Dell and VMware gave us their
pitch on &lt;a href=&quot;https://www.vmware.com/products/cloud-foundation.html&quot;&gt;VMware Cloud Foundation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To be clear, I had zero understanding of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;VCF&lt;/code&gt; prior to this. When I attend
these events, I want to ask legitimately &lt;strong&gt;DUMB&lt;/strong&gt; questions, as I would as a
customer hearing about a product for the first time. Now, to be fair, I
absolutely love some of the things that VMware does, but as with most other
companies, there are some things that do not make sense. One thing I find
extremely annoying from VMware is, in most cases, their API functionality,
and generally the most annoying part is the documentation around API usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IT IS GETTING BETTER&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With all of this said, I’d like to walk through what was in my head while
hearing these presentations. In the session prior to the one below, we heard
all about VxRail, etc. Very cool stuff for sure. However, as the session on
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;VCF&lt;/code&gt; progressed, my head went to a place of absolute disorganization based
on what I was hearing. That is typical for me, because, to be fair, I do
not generally look at solutions in the same manner a normal consumer would.
As I listened to the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;VCF&lt;/code&gt; presentation, I thought they had done
something really cool here and brought a layer of abstraction up the stack
that a consumer could tap into from an automation perspective. Meaning that one
could automate a &lt;strong&gt;FULL&lt;/strong&gt; VMware stack from a single API endpoint and not
have to worry about all of the additional layers (APIs) of everything else at
the lower levels (vCenter, vRA, vRO, NSX, etc.). So I started in with my questions,
hoping to get an answer on what the real story was. Did I ask the questions
as clearly as I could have? Probably not. But as usual (IMO), the lack of a clear
strategy from VMware across all products was rampant in my brain. At one point,
the message seemed to be that if you are only concerned with vCenter, then this
probably is not for you (my interpretation). In reality, I want to be concerned
about the whole stack; full-stack automation is definitely what I am after here,
but I digress. Did I misinterpret the story being told? Possibly, and I hope that
is the case. But my real point is this: every vendor needs to convey a clear story
about their strategy so that every level of consumer understands it. To be fair,
VMware is a huge company and covers a massive amount of ground, so this is
somewhat expected, but I personally think they should do a better job of telling
their complete story without all of the added layers of complexity. If I want to
do this, go over here; if I want to do that, go over there; etc.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;As I mentioned above, I truly hope that I completely misinterpreted the story
being told. If I did, I would absolutely love some additional dialogue for clarity
on my part.&lt;/p&gt;

&lt;p&gt;In the video segment below, &lt;a href=&quot;https://twitter.com/Ned1313&quot;&gt;Ned Bellavance&lt;/a&gt; tees
up my questions very well. And no, we did not plan this!&lt;/p&gt;

&lt;!-- Courtesy of embedresponsively.com //--&gt;

&lt;div class=&quot;responsive-video-container&quot;&gt;
    &lt;iframe src=&quot;https://www.youtube-nocookie.com/embed/Qc1fGYJilQI?start=1752&quot; frameborder=&quot;0&quot; webkitallowfullscreen=&quot;&quot; mozallowfullscreen=&quot;&quot; allowfullscreen=&quot;&quot;&gt;&lt;/iframe&gt;
  &lt;/div&gt;

&lt;p&gt;In addition to all of this, &lt;a href=&quot;https://twitter.com/CTOAdvisor&quot;&gt;Keith Townsend&lt;/a&gt;
just published a very good article, &lt;a href=&quot;https://www.thectoadvisor.com/blog/2019/10/2/vmwares-cloud-in-300-words&quot;&gt;VMware’s Cloud in 300-words&lt;/a&gt;, that you
should definitely check out as well.&lt;/p&gt;

&lt;p&gt;As always, enjoy!&lt;/p&gt;</content><author><name>Larry Smith Jr.</name></author><category term="Automation" /><category term="VMware" /><summary type="html">CFD6 VMware API Questions</summary></entry></feed>