diff --git a/TSG/EnvironmentValidator/Networking/README.md b/TSG/EnvironmentValidator/Networking/README.md
index 2119ae8..98e28fa 100644
--- a/TSG/EnvironmentValidator/Networking/README.md
+++ b/TSG/EnvironmentValidator/Networking/README.md
@@ -4,18 +4,47 @@ For Azure Local Network Resources not related to Environment Validator, see [TSG
## 📚 Table of Contents
+- [Troubleshoot: Add Node Network ATC Service](Troubleshoot-Network-Test-AddNode-NetworkATC-Service.md)
+- [Troubleshoot: AKS DNS Server CIDR Overlap](Troubleshoot-Network-Test-AKS-DnsServer-Cidr-Overlap.md)
+- [Troubleshoot: AKS POD CIDR IP Pool Overlap](Troubleshoot-Network-Test-AKS-PodCidr-IpPool-Overlap.md)
+- [Troubleshoot: AKS Proxy Server CIDR Overlap](Troubleshoot-Network-Test-AKS-ProxyServer-Cidr-Overlap.md)
+- [Troubleshoot: AKS Service CIDR IP Pool Overlap](Troubleshoot-Network-Test-AKS-ServiceCidr-IpPool-Overlap.md)
+- [Troubleshoot: Cluster Network Intent Status](Troubleshoot-Network-Test-Cluster-Intent-Status.md)
+- [Troubleshoot: Cluster Management Intent Exists](Troubleshoot-Network-Test-Cluster-MgmtIntent-Exists.md)
+- [Troubleshoot: Cluster Storage Intent Exists](Troubleshoot-Network-Test-Cluster-StorageIntent-Exists.md)
- [Troubleshoot: Host Network Configuration Readiness](Troubleshoot-Network-Test-HostNetworkConfigurationReadiness.md)
+- [Troubleshoot: Infrastructure IP Azure Endpoint Connection](Troubleshoot-Network-Test-InfraIP-Azure-Endpoint-Connection.md)
+- [Troubleshoot: Infrastructure IP DNS Client Readiness](Troubleshoot-Network-Test-InfraIP-DNS-Client-Readiness.md)
+- [Troubleshoot: Infrastructure IP DNS Port 53 Connection](Troubleshoot-Network-Test-InfraIP-DNS-Port-53.md)
+- [Troubleshoot: Infrastructure IP Hyper-V Readiness](Troubleshoot-Network-Test-InfraIP-Hyper-V-Readiness.md)
+- [Troubleshoot: Infrastructure IP IP Readiness](Troubleshoot-Network-Test-InfraIP-IPReadiness.md)
+- [Troubleshoot: Infrastructure IP vNIC Readiness](Troubleshoot-Network-Test-InfraIP-vNIC-Readiness.md)
+- [Troubleshoot: Infrastructure IP VMSwitch Readiness](Troubleshoot-Network-Test-InfraIP-VMSwitch-Readiness.md)
- [Troubleshoot: Infrastructure IP Pool Readiness](Troubleshoot-Network-Test-InfraIpPoolReadiness.md)
+- [Troubleshoot: Intent Virtual Adapter Existence](Troubleshoot-Network-Test-IntentVirtualAdapterExistence.md)
- [Troubleshoot: Management Adapter Readiness](Troubleshoot-Network-Test-ManagementAdapterReadiness.md)
+- [Troubleshoot: Management IP Connection](Troubleshoot-Network-Test-MgmtIP-Connection.md)
+- [Troubleshoot: Management IP in Infrastructure Subnet](Troubleshoot-Network-Test-MgmtIp-In-InfraSubnet.md)
+- [Troubleshoot: Management IP Not in Infrastructure Pool](Troubleshoot-Network-Test-MgmtIp-NotIn-InfraPool.md)
+- [Troubleshoot: Management IP Not Overlap Storage Subnet](Troubleshoot-Network-Test-MgmtIP-Not-Overlap-Storage-Subnet.md)
+- [Troubleshoot: Management IP On Correct Adapter](Troubleshoot-Network-Test-MgmtIP-On-Correct-Adapter.md)
- [Troubleshoot: MOC Stack Network Port](Troubleshoot-Network-Test-MOCStackNetworkPort.md)
- [Troubleshoot: Network Adapter Driver Consistency Check](Troubleshoot-Network-Test-NetworkAdapter-DriverConsistency.md)
- [Troubleshoot: Network Adapter Inbox Driver Check](Troubleshoot-Network-Test-NetworkAdapter-InboxDriver.md)
- [Troubleshoot: Network Default Gateway Requirement](Troubleshoot-Network-Test-NetworkDefaultGatewayRequirement.md)
+- [Troubleshoot: Network Intent Requirement](Troubleshoot-Network-Test-NetworkIntentRequirement.md)
+- [Troubleshoot: New Node Duplicate IP](Troubleshoot-Network-Test-NewNode-Duplicate-IP.md)
+- [Troubleshoot: New Node First Adapter Validity](Troubleshoot-Network-Test-NewNode-First-Adapter.md)
+- [Troubleshoot: New Node IP Conflict](Troubleshoot-Network-Test-NewNode-IP-Conflict.md)
+- [Troubleshoot: New Node Name and IP Match](Troubleshoot-Network-Test-NewNode-Name-IP-Match.md)
+- [Troubleshoot: New Node Outside Management Range](Troubleshoot-Network-Test-NewNode-Outside-MgmtRange.md)
- [Troubleshoot: Storage Adapter IP Configuration](Troubleshoot-Network-Test-StorageAdapterIPConfiguration.md)
- [Troubleshoot: Storage Adapter Readiness](Troubleshoot-Network-Test-StorageAdapterReadiness.md)
- [Troubleshoot: Storage Connections Connectivity Check](Troubleshoot-Network-Test-StorageConnections-ConnectivityCheck.md)
- [Troubleshoot: Storage Connections No Validation Method](Troubleshoot-Network-Test-StorageConnections-NoValidationMethod.md)
- [Troubleshoot: Storage Connections VMSwitch Configuration](Troubleshoot-Network-Test-StorageConnections-VMSwitch-Configuration.md)
+- [Troubleshoot: Storage Connectivity Type](Troubleshoot-Network-Test-StorageConnectivityType.md)
+- [Troubleshoot: Storage VLAN for 2-Node Switchless](Troubleshoot-Network-Test-StorageVlan-2Node-Switchless.md)
---
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AKS-DnsServer-Cidr-Overlap.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AKS-DnsServer-Cidr-Overlap.md
new file mode 100644
index 0000000..078abf1
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AKS-DnsServer-Cidr-Overlap.md
@@ -0,0 +1,167 @@
+# AzureLocal_Network_Test_AKS_Subnet_POD_SERVICE_CIDR_DNSServer_Overlap
+
+| Name | AzureLocal_Network_Test_AKS_Subnet_POD_SERVICE_CIDR_DNSServer_Overlap |
+| --- | --- |
+| Severity | Informational: This validator provides information but will not block operations. |
+| Applicable Scenarios | Deployment |
+
+## Overview
+
+This validator checks that DNS server addresses configured on the management adapter do not overlap with the Kubernetes POD CIDR or Service CIDR subnets. While this is informational and will not block deployment, DNS servers in these ranges may indicate a configuration issue.
+
+## Requirements
+
+DNS servers should meet the following recommendations:
+1. DNS server IP addresses should not fall within the POD CIDR subnet (default: `10.244.0.0/16`)
+2. DNS server IP addresses should not fall within the Service CIDR subnet (default: `10.96.0.0/12`)
+
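+To spot-check an address against these ranges, the following sketch (an illustration for manual checks, not the validator's own implementation) tests whether an IPv4 address falls inside a CIDR block:
+
+```powershell
+# Sketch: test whether an IPv4 address falls inside a CIDR block.
+# Illustrative helper for manual spot checks; not the validator's code.
+function Test-IpInCidr {
+    param([string]$IpAddress, [string]$Cidr)
+
+    $network, $prefix = $Cidr.Split('/')
+
+    # Convert both addresses to 32-bit integers in network byte order
+    $ipBytes  = ([System.Net.IPAddress]::Parse($IpAddress)).GetAddressBytes()
+    $netBytes = ([System.Net.IPAddress]::Parse($network)).GetAddressBytes()
+    [Array]::Reverse($ipBytes); [Array]::Reverse($netBytes)
+    $ipInt  = [BitConverter]::ToUInt32($ipBytes, 0)
+    $netInt = [BitConverter]::ToUInt32($netBytes, 0)
+
+    # Build the subnet mask from the prefix length
+    $mask = [uint32]([math]::Pow(2, 32) - [math]::Pow(2, 32 - [int]$prefix))
+    return (($ipInt -band $mask) -eq ($netInt -band $mask))
+}
+
+Test-IpInCidr -IpAddress "10.244.0.10" -Cidr "10.244.0.0/16"   # True  -> overlaps POD CIDR
+Test-IpInCidr -IpAddress "192.168.1.1" -Cidr "10.96.0.0/12"    # False -> no overlap
+```
+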
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about the DNS server addresses and whether they overlap with AKS CIDR ranges.
+
+```json
+{
+ "Name": "AzureLocal_Network_Test_AKS_Subnet_POD_SERVICE_CIDR_DNSServer_Overlap",
+ "DisplayName": "Test for DNS server overlaps with POD CIDR Subnet 10.244.0.0/16 and Service CIDR Subnet 10.96.0.0/12",
+ "Title": "Test for DNS server overlaps with POD CIDR Subnet 10.244.0.0/16 and Service CIDR Subnet 10.96.0.0/12",
+ "Status": 1,
+ "Severity": 0,
+ "Description": "Checking DNS server address(es) not within the POD CIDR Subnet 10.244.0.0/16 and Service CIDR Subnet 10.96.0.0/12",
+ "Remediation": "Verify DNS servers configured are not overlapping with AKS pre-defined POD subnet and Service subnet. Check https://learn.microsoft.com/en-us/azure/aks/aksarc/aks-hci-ip-address-planning for more information.",
+ "TargetResourceID": "DNSServer-10.244.0.10",
+ "TargetResourceName": "DNSServer-10.244.0.10",
+ "TargetResourceType": "DNSServer-10.244.0.10",
+ "Timestamp": "\\/Date(timestamp)\\/",
+ "AdditionalData": {
+ "Source": "DNSServerPODServiceCIDR",
+ "Resource": "DNSServerPODServiceCIDR",
+ "Detail": "DNS server address(es): 10.244.0.10. POD CIDR: 10.244.0.0/16; Service CIDR: 10.96.0.0/12",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Informational: DNS Server Overlaps with POD or Service CIDR
+
+**Message:**
+```text
+DNS server address(es): 10.244.0.10. POD CIDR: 10.244.0.0/16; Service CIDR: 10.96.0.0/12
+```
+
+**Description:** The DNS server address configured on the management adapter overlaps with the POD CIDR or Service CIDR subnet. This is informational and will not block deployment, but may indicate a configuration issue or potential routing conflicts. Microsoft will upgrade the severity of this validator in the future.
+
+#### Recommended Actions
+
+##### Verify DNS Server Configuration
+
+1. Check the DNS server configuration on the management adapter:
+
+ ```powershell
+ # Get DNS configuration from appropriate adapter
+ # Change $adapterName to the management adapter's name on your system.
+ $adapterName = "myAdapterName"
+ Get-DnsClientServerAddress -InterfaceAlias $adapterName -AddressFamily IPv4
+ ```
+
+2. Verify the DNS server IP addresses:
+ - Check if the reported DNS server IP is correct
+ - Verify the DNS server is reachable and functional
+ - Confirm this is the intended DNS server for your environment
+
+3. If the DNS server address is incorrect or in a conflicting range, consider reconfiguring it:
+
+ ```powershell
+ # Set DNS server to an address outside the CIDR ranges
+ # Replace with your actual DNS server address
+ $dnsServers = @("192.168.1.1", "192.168.1.2")
+ Set-DnsClientServerAddress -InterfaceAlias $adapterName -ServerAddresses $dnsServers
+ ```
+
+4. Verify the new DNS configuration:
+
+ ```powershell
+ Get-DnsClientServerAddress -InterfaceAlias $adapterName -AddressFamily IPv4
+
+ # Test DNS resolution
+ Resolve-DnsName "microsoft.com"
+ ```
+
+##### Understanding the Impact
+
+Having a DNS server in the POD or Service CIDR ranges may cause:
+- **Routing conflicts**: DNS queries may be misdirected when AKS workloads are deployed
+- **Connectivity issues**: DNS resolution may fail for pods or services
+- **Network complexity**: Troubleshooting becomes more difficult
+
+However, if the DNS server is:
+- Actually located at that address and functioning correctly
+- Properly routed and accessible
+- Not conflicting with AKS workload networking
+
+Then you may choose to accept this configuration and monitor for issues.
+
+##### When to Reconfigure
+
+Consider reconfiguring the DNS server address if:
+1. The DNS server is a placeholder or temporary configuration
+2. You plan to deploy AKS workloads on this cluster
+3. You want to avoid potential future networking conflicts
+4. You have DNS servers available in non-conflicting ranges
+
+##### When It's Acceptable to Proceed
+
+You may proceed without changes if:
+1. The DNS server is intentionally placed in this range
+2. You have verified it does not conflict with your AKS deployment plans
+3. Your network routing is configured to handle this scenario
+4. You've documented this configuration for future reference
+
+---
+
+## Additional Information
+
+### Default AKS CIDR Ranges
+
+- **POD CIDR**: `10.244.0.0/16` (default)
+ - Range: `10.244.0.0` to `10.244.255.255`
+ - Reserved for Kubernetes pod IP addresses
+- **Service CIDR**: `10.96.0.0/12` (default)
+ - Range: `10.96.0.0` to `10.111.255.255`
+ - Reserved for Kubernetes service IP addresses
+
+### Recommended DNS Server Placement
+
+For optimal network design, place DNS servers in ranges that do not conflict with:
+- POD CIDR: `10.244.0.0/16`
+- Service CIDR: `10.96.0.0/12`
+- Infrastructure IP pools
+
+**Common DNS server locations:**
+- Corporate DNS: Use your organization's existing DNS infrastructure
+- On-premises DNS: Typically in your datacenter's management network
+- Cloud DNS: Azure DNS or other cloud-provided DNS services
+
+**Example non-conflicting ranges:**
+- `192.168.x.x` - Private network range
+- `10.x.x.x` - Private network range (avoid `10.96.0.0/12` and `10.244.0.0/16`)
+- `172.16.x.x` - Private network range
+
+### Related Documentation
+
+- [AKS IP Address Planning](https://learn.microsoft.com/en-us/azure/aks/aksarc/aks-hci-ip-address-planning)
+- [Azure Local Network Requirements](https://learn.microsoft.com/en-us/azure-stack/hci/concepts/host-network-requirements)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AKS-PodCidr-IpPool-Overlap.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AKS-PodCidr-IpPool-Overlap.md
new file mode 100644
index 0000000..5a0388e
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AKS-PodCidr-IpPool-Overlap.md
@@ -0,0 +1,148 @@
+# AzStackHci_Network_Test_AKS_Subnet_POD_CIDR_IP_Range_Overlap
+
+| Name | AzStackHci_Network_Test_AKS_Subnet_POD_CIDR_IP_Range_Overlap |
+| --- | --- |
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Deployment |
+
+## Overview
+
+This validator checks that IP pool ranges (StartingAddress to EndingAddress) do not overlap with the Kubernetes POD CIDR subnet. The POD CIDR is reserved for Kubernetes pod networking in AKS workloads, and customer IP pools must not conflict with this range.
+
+## Requirements
+
+IP pools must meet the following requirement:
+1. IP pool StartingAddress and EndingAddress must not fall within the POD CIDR subnet (default: `10.244.0.0/16`)
+
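+Because an IP pool is a range rather than a single address, the overlap test compares interval boundaries. The sketch below is illustrative only (the pool addresses are the examples used in this article):
+
+```powershell
+# Sketch: check whether an IP pool range overlaps the POD CIDR.
+function ConvertTo-UInt32([string]$Ip) {
+    $bytes = ([System.Net.IPAddress]::Parse($Ip)).GetAddressBytes()
+    [Array]::Reverse($bytes)
+    return [BitConverter]::ToUInt32($bytes, 0)
+}
+
+$podStart = ConvertTo-UInt32 "10.244.0.0"
+$podEnd   = ConvertTo-UInt32 "10.244.255.255"   # last address of 10.244.0.0/16
+
+$poolStart = ConvertTo-UInt32 "10.244.1.10"     # your pool's StartingAddress
+$poolEnd   = ConvertTo-UInt32 "10.244.1.20"     # your pool's EndingAddress
+
+# Two ranges overlap when each one starts before the other ends
+$overlaps = ($poolStart -le $podEnd) -and ($poolEnd -ge $podStart)
+"Pool overlaps POD CIDR: $overlaps"              # True here -> pool must be moved
+```
+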
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about which IP pool is overlapping with the POD CIDR. The `TargetResourceID` field shows the specific IP pool range.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_AKS_Subnet_POD_CIDR_IP_Range_Overlap",
+ "DisplayName": "Test for overlaps with POD CIDR Subnet 10.244.0.0/16",
+ "Title": "Test for overlaps with POD CIDR Subnet 10.244.0.0/16",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Checking start and end address are not within the POD CIDR Subnet 10.244.0.0/16",
+ "Remediation": "Verify IP pool(s) are not overlapping with AKS pre-defined POD subnet. Check https://learn.microsoft.com/en-us/azure/aks/aksarc/aks-hci-ip-address-planning for more information.",
+ "TargetResourceID": "IpPool-10.244.1.10-10.244.1.20",
+ "TargetResourceName": "ManagementIPRange",
+ "TargetResourceType": "Network Range",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "CustomerNetwork",
+ "Resource": "CustomerSubnet",
+ "Detail": "Start IP '10.244.1.10' and End IP '10.244.1.20' overlaps with the POD subnet: 10.244.0.0/16.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: IP Range Overlaps with POD CIDR
+
+**Error Message:**
+```text
+IP Range: 10.244.1.10 - 10.244.1.20 overlaps with K8s Default POD CIDR: 10.244.0.0/16. Please reconfigure the network to resolve this conflict.
+```
+
+**Root Cause:** The IP pool range overlaps with the Kubernetes POD CIDR subnet (default: `10.244.0.0/16`). This creates a critical conflict because these addresses are reserved for Kubernetes pod networking in AKS workloads.
+
+#### Remediation Steps
+
+##### Reconfigure IP Pool to Avoid POD CIDR Range
+
+The IP pool must be reconfigured to use addresses outside the POD CIDR range (`10.244.0.0/16` by default).
+
+1. Review the current IP pool configuration to identify the overlapping range. Look at the `TargetResourceID` field in the error message to identify the specific IP pool (e.g., `IpPool-10.244.1.10-10.244.1.20`).
+
+2. Understand the POD CIDR range:
+ - Default POD CIDR: `10.244.0.0/16`
+ - This means addresses from `10.244.0.0` to `10.244.255.255` are reserved
+ - Your IP pools must be outside this entire range
+
+3. Choose a new IP range that does not overlap with the POD CIDR subnet. Recommended alternatives:
+ - `10.0.x.x/24` - Common for small networks
+ - `192.168.x.x/24` - Private network range
+ - `172.16.x.x` through `172.31.x.x` - Private network range
+ - Any other subnet in your enterprise that doesn't conflict with `10.244.0.0/16`
+
+4. Update the IP pool configuration through your deployment method:
+
+ **For Azure portal deployment:**
+ - Modify the IP pool configuration in the deployment wizard before proceeding
+
+ **Example IP pool configuration (outside POD CIDR):**
+ ```powershell
+ # Example: Change from 10.244.1.10-10.244.1.20 to 10.0.1.10-10.0.1.20
+ $ipPool = @{
+ StartingAddress = "10.0.1.10"
+ EndingAddress = "10.0.1.20"
+ }
+ ```
+
+5. Verify the new IP range does not conflict with:
+ - POD CIDR: `10.244.0.0/16`
+ - Service CIDR: `10.96.0.0/12` (see related validator)
+ - Other network segments in your environment
+
+6. Retry the validation after reconfiguring the IP pool.
+
+> **Important**: Ensure the new IP range:
+> - Is routable within your network
+> - Does not conflict with existing infrastructure
+> - Has sufficient capacity for your deployment needs
+> - Is in the same subnet as node management IPs
+
+---
+
+## Additional Information
+
+### Default AKS CIDR Ranges
+
+- **POD CIDR**: `10.244.0.0/16` (default, can be customized)
+ - Reserved for Kubernetes pod IP addresses
+ - Critical: Must not overlap with customer networks
+- **Service CIDR**: `10.96.0.0/12` (default)
+ - Reserved for Kubernetes service IP addresses
+ - Warning: Should not overlap with customer networks
+
+### Understanding the POD CIDR Range
+
+The POD CIDR `10.244.0.0/16` includes all addresses from:
+- **Start**: `10.244.0.0`
+- **End**: `10.244.255.255`
+- **Total addresses**: 65,536 addresses
+
+Your IP pools must use addresses completely outside this range.
+
+### Recommended IP Addressing Schemes
+
+| Use Case | Recommended Range | CIDR Notation | Addresses |
+|----------|------------------|---------------|-----------|
+| Small deployment | 192.168.1.0 - 192.168.1.255 | 192.168.1.0/24 | 254 |
+| Medium deployment | 10.0.0.0 - 10.0.255.255 | 10.0.0.0/16 | 65,534 |
+| Large deployment | 172.16.0.0 - 172.31.255.255 | 172.16.0.0/12 | 1,048,574 |
+
+Avoid using `10.244.x.x` or `10.96.x.x` to prevent conflicts with AKS default ranges.
+
+### Related Documentation
+
+- [AKS IP Address Planning](https://learn.microsoft.com/en-us/azure/aks/aksarc/aks-hci-ip-address-planning)
+- [Azure Local Network Requirements](https://learn.microsoft.com/en-us/azure-stack/hci/concepts/host-network-requirements)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AKS-ProxyServer-Cidr-Overlap.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AKS-ProxyServer-Cidr-Overlap.md
new file mode 100644
index 0000000..595f071
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AKS-ProxyServer-Cidr-Overlap.md
@@ -0,0 +1,256 @@
+# AzureLocal_Network_Test_AKS_Subnet_POD_CIDR_ProxyServer_Overlap
+
+| Name | AzureLocal_Network_Test_AKS_Subnet_POD_CIDR_ProxyServer_Overlap |
+| --- | --- |
+| Severity | Informational: This validator provides information but will not block operations. |
+| Applicable Scenarios | Deployment |
+
+## Overview
+
+This validator checks that proxy server addresses configured on the system do not overlap with the Kubernetes POD CIDR or Service CIDR subnets. While this is informational and will not block deployment, proxy servers in these ranges may cause routing conflicts or connectivity issues with AKS workloads.
+
+## Requirements
+
+Proxy servers should meet the following recommendations:
+1. Proxy server IP addresses should not fall within the POD CIDR subnet (default: `10.244.0.0/16`)
+2. Proxy server IP addresses should not fall within the Service CIDR subnet (default: `10.96.0.0/12`)
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about the proxy server addresses and whether they overlap with AKS CIDR ranges.
+
+```json
+{
+ "Name": "AzureLocal_Network_Test_AKS_Subnet_POD_CIDR_ProxyServer_Overlap",
+ "DisplayName": "Test for Proxy server overlaps with POD CIDR Subnet 10.244.0.0/16 and Service CIDR Subnet 10.96.0.0/12",
+ "Title": "Test for Proxy server overlaps with POD CIDR Subnet 10.244.0.0/16 and Service CIDR Subnet 10.96.0.0/12",
+ "Status": 1,
+ "Severity": 0,
+ "Description": "Checking Proxy server address(es) not within the POD CIDR Subnet 10.244.0.0/16 and Service CIDR Subnet 10.96.0.0/12",
+ "Remediation": "Verify IP of the proxy server(s) configured are not overlapping with AKS pre-defined POD subnet and Service subnet. Check https://learn.microsoft.com/en-us/azure/aks/aksarc/aks-hci-ip-address-planning for more information.",
+ "TargetResourceID": "ProxyServer-proxyserver.contoso.com",
+ "TargetResourceName": "ProxyServer-proxyserver.contoso.com",
+ "TargetResourceType": "ProxyServer-proxyserver.contoso.com",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "ProxyServerPODServiceCIDR",
+ "Resource": "ProxyServerPODServiceCIDR",
+ "Detail": "Proxy server address(es): proxyserver.contoso.com. POD CIDR: 10.244.0.0/16; Service CIDR: 10.96.0.0/12",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Informational: Proxy Server Overlaps with POD or Service CIDR
+
+**Message:**
+```text
+Proxy server address(es): proxyserver.contoso.com. POD CIDR: 10.244.0.0/16; Service CIDR: 10.96.0.0/12
+```
+
+**Description:** The proxy server address configured on the system overlaps with the POD CIDR or Service CIDR subnet. This is informational and will not block deployment, but may indicate a configuration issue or potential routing conflicts when deploying AKS workloads. Microsoft will upgrade the severity level in the future.
+
+#### Recommended Actions
+
+##### Verify Proxy Server Configuration
+
+The validator checks proxy configuration from three sources:
+- WinHTTP proxy settings
+- WinINET proxy settings
+- Environment variables (HTTP_PROXY and HTTPS_PROXY)
+
+1. Check the current proxy configuration:
+
+ ```powershell
+ # Check WinHTTP proxy settings
+ netsh winhttp show proxy
+
+ # Check WinINET proxy settings (Internet Explorer proxy)
+ Get-ItemProperty -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings" | Select-Object ProxyEnable, ProxyServer
+
+ # Check environment variables
+ [Environment]::GetEnvironmentVariable("HTTP_PROXY", "Machine")
+ [Environment]::GetEnvironmentVariable("HTTPS_PROXY", "Machine")
+ [Environment]::GetEnvironmentVariable("HTTP_PROXY", "User")
+ [Environment]::GetEnvironmentVariable("HTTPS_PROXY", "User")
+ ```
+
+2. Resolve the proxy server hostname to verify its IP address:
+
+ ```powershell
+ # Replace with your proxy server hostname from the error message
+ $proxyHostname = "proxyserver.contoso.com"
+ [System.Net.Dns]::GetHostAddresses($proxyHostname) | Select-Object IPAddressToString
+ ```
+
+3. Check if the resolved IP address is in the POD CIDR or Service CIDR range:
+ - POD CIDR: `10.244.0.0/16` (range: `10.244.0.0` to `10.244.255.255`)
+ - Service CIDR: `10.96.0.0/12` (range: `10.96.0.0` to `10.111.255.255`)
+
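+The membership test in step 3 can be scripted. The sketch below is illustrative (`proxyserver.contoso.com` is the example hostname from the validator output); it resolves the proxy and flags any IPv4 address that lands in either CIDR:
+
+```powershell
+# Sketch: resolve the proxy hostname and flag addresses inside either AKS CIDR.
+$proxyHostname = "proxyserver.contoso.com"   # hostname from the validator output
+
+function ConvertTo-UInt32([System.Net.IPAddress]$Ip) {
+    $bytes = $Ip.GetAddressBytes()
+    [Array]::Reverse($bytes)
+    return [BitConverter]::ToUInt32($bytes, 0)
+}
+
+$podNet = ConvertTo-UInt32 ([System.Net.IPAddress]::Parse("10.244.0.0"))
+$svcNet = ConvertTo-UInt32 ([System.Net.IPAddress]::Parse("10.96.0.0"))
+
+[System.Net.Dns]::GetHostAddresses($proxyHostname) |
+    Where-Object { $_.AddressFamily -eq 'InterNetwork' } |
+    ForEach-Object {
+        $ip = ConvertTo-UInt32 $_
+        $inPod = ($ip -band 0xFFFF0000) -eq $podNet   # /16 mask
+        $inSvc = ($ip -band 0xFFF00000) -eq $svcNet   # /12 mask
+        "{0}: in POD CIDR = {1}; in Service CIDR = {2}" -f $_, $inPod, $inSvc
+    }
+```
+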
+##### Reconfigure Proxy Settings (If Needed)
+
+If the proxy server IP address conflicts with AKS CIDR ranges, consider one of these options:
+
+**Option 1: Use a different proxy server**
+
+If available, configure a proxy server that is not in the conflicting range:
+
+```powershell
+# Set WinHTTP proxy
+netsh winhttp set proxy proxy-server="newproxy.contoso.com:8080" bypass-list=""
+
+# Set environment variables
+[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://newproxy.contoso.com:8080", "Machine")
+[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://newproxy.contoso.com:8080", "Machine")
+```
+
+**Option 2: Contact your network administrator**
+
+If the proxy server is managed by your network team:
+1. Inform them about the IP address conflict with AKS CIDR ranges
+2. Request that the proxy server be moved to a non-conflicting IP address
+3. Update the proxy configuration once the change is made
+
+**Option 3: Remove proxy configuration (if not required)**
+
+If the proxy is not required for your deployment:
+
+```powershell
+# Remove WinHTTP proxy
+netsh winhttp reset proxy
+
+# Remove environment variables
+[Environment]::SetEnvironmentVariable("HTTP_PROXY", $null, "Machine")
+[Environment]::SetEnvironmentVariable("HTTPS_PROXY", $null, "Machine")
+```
+
+> **Warning**: Only remove proxy configuration if you're certain it's not required for internet connectivity or deployment requirements.
+
+##### Verify Changes
+
+After making changes, verify the new configuration:
+
+```powershell
+# Check WinHTTP proxy
+netsh winhttp show proxy
+
+# Check environment variables
+[Environment]::GetEnvironmentVariable("HTTP_PROXY", "Machine")
+[Environment]::GetEnvironmentVariable("HTTPS_PROXY", "Machine")
+
+# Test connectivity through new proxy
+$proxyUri = "http://newproxy.contoso.com:8080"
+$webRequest = [System.Net.WebRequest]::Create("https://www.microsoft.com")
+$webRequest.Proxy = New-Object System.Net.WebProxy($proxyUri)
+$response = $webRequest.GetResponse()
+$response.StatusCode
+$response.Close()
+```
+
+##### Understanding the Impact
+
+Having a proxy server in the POD or Service CIDR ranges may cause:
+- **Routing conflicts**: Proxy traffic may be misdirected when AKS workloads are deployed
+- **Connectivity issues**: Outbound connections through the proxy may fail
+- **AKS deployment problems**: Container image pulls may fail
+- **Service mesh conflicts**: If using service mesh, proxy configuration may interfere
+
+##### When to Reconfigure
+
+Consider reconfiguring the proxy server if:
+1. You plan to deploy AKS workloads on this cluster
+2. The proxy server can be moved to a non-conflicting IP address
+3. You want to avoid potential future networking conflicts
+4. Your network team can provide an alternative proxy
+
+##### When It's Acceptable to Proceed
+
+You may proceed without changes if:
+1. The proxy server is intentionally placed in this range and properly configured
+2. Your network routing explicitly handles this scenario
+3. You've verified no conflicts exist with your specific AKS deployment plans
+4. You've documented this configuration for future reference
+5. You've tested connectivity and confirmed it works as expected
+
+---
+
+## Additional Information
+
+### Default AKS CIDR Ranges
+
+- **POD CIDR**: `10.244.0.0/16` (default)
+ - Range: `10.244.0.0` to `10.244.255.255`
+ - Reserved for Kubernetes pod IP addresses
+- **Service CIDR**: `10.96.0.0/12` (default)
+ - Range: `10.96.0.0` to `10.111.255.255`
+ - Reserved for Kubernetes service IP addresses
+
+### Proxy Configuration Sources
+
+The validator checks the following proxy configuration sources:
+
+1. **WinHTTP Proxy**:
+ - System-wide proxy settings
+ - Used by Windows services and applications
+ - Configured via `netsh winhttp`
+
+2. **WinINET Proxy**:
+ - User-level proxy settings (Internet Explorer proxy)
+ - Used by some applications
+ - Configured via Internet Options or registry
+
+3. **Environment Variables**:
+ - `HTTP_PROXY` (Machine and User level)
+ - `HTTPS_PROXY` (Machine and User level)
+ - Used by many command-line tools and applications
+
+All unique proxy servers found in these sources are checked against the AKS CIDR ranges.
+
+### DNS Resolution
+
+The validator resolves proxy server hostnames to IP addresses using DNS. If DNS resolution fails for a proxy server hostname, that proxy is skipped in the validation.
+
+### Recommended Proxy Server Placement
+
+For optimal network design, place proxy servers in ranges that do not conflict with:
+- POD CIDR: `10.244.0.0/16`
+- Service CIDR: `10.96.0.0/12`
+- Infrastructure IP pools
+
+**Example non-conflicting ranges:**
+- `192.168.x.x` - Private network range
+- `10.x.x.x` - Private network range (avoid `10.96.0.0/12` and `10.244.0.0/16`)
+- `172.16.x.x` - Private network range
+
+### Proxy Bypass Configuration
+
+If you must keep a proxy server in a conflicting range, consider configuring bypass lists to exclude AKS-related traffic:
+
+```powershell
+# Example: Set proxy with bypass list
+netsh winhttp set proxy proxy-server="proxyserver.contoso.com:8080" bypass-list="*.local;10.244.*;10.96.*;localhost"
+```
+
+This ensures that traffic to AKS CIDR ranges bypasses the proxy, reducing potential conflicts.
+
+### Related Documentation
+
+- [AKS IP Address Planning](https://learn.microsoft.com/en-us/azure/aks/aksarc/aks-hci-ip-address-planning)
+- [Azure Local Network Requirements](https://learn.microsoft.com/en-us/azure-stack/hci/concepts/host-network-requirements)
+- [Proxy configuration for Azure Local](https://learn.microsoft.com/en-us/azure-stack/hci/manage/configure-proxy-settings)
+- [Configure WinHTTP proxy settings](https://learn.microsoft.com/en-us/windows/win32/winhttp/winhttp-autoproxy-support)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AKS-ServiceCidr-IpPool-Overlap.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AKS-ServiceCidr-IpPool-Overlap.md
new file mode 100644
index 0000000..d68a278
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AKS-ServiceCidr-IpPool-Overlap.md
@@ -0,0 +1,184 @@
+# AzStackHci_Network_Test_AKS_Subnet_Service_CIDR_IP_Range_Overlap
+
+| Name | AzStackHci_Network_Test_AKS_Subnet_Service_CIDR_IP_Range_Overlap |
+| --- | --- |
+| Severity | Warning: This validator will not block operations but may result in suboptimal network conditions. |
+| Applicable Scenarios | Deployment |
+
+## Overview
+
+This validator checks that IP pool ranges (StartingAddress to EndingAddress) do not overlap with the Kubernetes Service CIDR subnet. While this is a warning rather than a critical error, overlaps with the Service CIDR may result in suboptimal network conditions or connectivity issues with AKS workloads.
+
+## Requirements
+
+IP pools should meet the following requirement:
+1. IP pool StartingAddress and EndingAddress should not fall within the Service CIDR subnet (default: `10.96.0.0/12`)
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about which IP pool is overlapping with the Service CIDR. The `TargetResourceID` field shows the specific IP pool range.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_AKS_Subnet_Service_CIDR_IP_Range_Overlap",
+ "DisplayName": "Test for overlaps with Service CIDR IP Subnet 10.96.0.0/12",
+ "Title": "Test for overlaps with Service CIDR IP Subnet 10.96.0.0/12",
+ "Status": 1,
+ "Severity": 1,
+ "Description": "Checking start and end address are not within the Service CIDR Subnet",
+ "Remediation": "Verify IP pool(s) are not overlapping with AKS pre-defined Service subnet. Check https://learn.microsoft.com/en-us/azure/aks/aksarc/aks-hci-ip-address-planning for more information.",
+ "TargetResourceID": "IpPool-10.100.1.10-10.100.1.20",
+ "TargetResourceName": "ManagementIPRange",
+ "TargetResourceType": "Network Range",
+ "Timestamp": "/",
+ "AdditionalData": {
+ "Source": "CustomerNetwork",
+ "Resource": "CustomerSubnet",
+ "Detail": "Start IP '10.244.1.10' and End IP '10.244.1.20' overlaps with the Service subnet: 10.96.0.0/12.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Warning: IP Range Overlaps with Service CIDR
+
+**Warning Message:**
+```text
+IP Range: 10.100.1.10 - 10.100.1.20 overlaps with K8s Default Service CIDR: 10.96.0.0/12. Be aware that this may result in suboptimal network conditions.
+```
+
+**Description:** The IP pool range overlaps with the Kubernetes Service CIDR subnet (default: `10.96.0.0/12`). While this is a warning and not critical, it may cause network routing issues or conflicts when deploying AKS workloads.
+
+#### Remediation Steps
+
+##### Option 1: Reconfigure IP Pool to Avoid Service CIDR Range (Recommended)
+
+While this is a warning and not critical, it is strongly recommended to reconfigure the IP pool to avoid potential network issues.
+
+1. Review the current IP pool configuration to identify the overlapping range. Look at the `TargetResourceID` field in the error message to identify the specific IP pool (e.g., `IpPool-10.100.1.10-10.100.1.20`).
+
+2. Understand the Service CIDR range:
+ - Default Service CIDR: `10.96.0.0/12`
+ - This means addresses from `10.96.0.0` to `10.111.255.255` are reserved
+ - Your IP pools should be outside this entire range
+
+3. Choose a new IP range that does not overlap with the Service CIDR subnet. Recommended alternatives:
+ - `10.0.x.x/24` - Common for small networks (note: not in `10.96-10.111` range)
+ - `192.168.x.x/24` - Private network range
+ - `172.16.x.x` through `172.31.x.x` - Private network range
+ - `10.112.x.x` and higher (outside the `10.96.0.0/12` range)
+
+4. Update the IP pool configuration through your deployment method:
+
+ **For Azure portal deployment:**
+ - Modify the IP pool configuration in the deployment wizard before proceeding
+
+ **For PowerShell deployment:**
+ - Update the IP pool parameters in your deployment script or configuration file
+
+ **Example IP pool configuration (outside Service CIDR):**
+ ```powershell
+ # Example: Change from 10.100.1.10-10.100.1.20 to 10.0.1.10-10.0.1.20
+ $ipPool = @{
+ StartingAddress = "10.0.1.10"
+ EndingAddress = "10.0.1.20"
+ }
+ ```
+
+5. Verify the new IP range does not conflict with:
+ - POD CIDR: `10.244.0.0/16` (see related validator)
+ - Service CIDR: `10.96.0.0/12`
+ - Other network segments in your environment
+
+6. Retry the validation after reconfiguring the IP pool.
+
+> **Note**: Ensure the new IP range:
+> - Is routable within your network
+> - Does not conflict with existing infrastructure
+> - Has sufficient capacity for your deployment needs
+> - Is in the same subnet as node management IPs
+
+##### Option 2: Accept the Warning and Proceed (Not Recommended)
+
+If reconfiguring the IP pool is not feasible, you may proceed with the deployment, but be aware that:
+- This may result in suboptimal network conditions
+- Connectivity issues with AKS workloads may occur
+- Network routing problems may arise
+- You may need to reconfigure the network later to resolve issues
+
+**If you choose to proceed:**
+1. Document the IP overlap in your deployment notes
+2. Monitor for network connectivity issues after deployment
+3. Be prepared to reconfigure the network if problems arise
+
+---
+
+## Additional Information
+
+### Default AKS CIDR Ranges
+
+- **POD CIDR**: `10.244.0.0/16` (default, can be customized)
+ - Reserved for Kubernetes pod IP addresses
+ - Critical: Must not overlap with customer networks
+- **Service CIDR**: `10.96.0.0/12` (default)
+ - Reserved for Kubernetes service IP addresses
+ - Warning: Should not overlap with customer networks
+
+### Understanding the Service CIDR Range
+
+The Service CIDR `10.96.0.0/12` includes all addresses from:
+- **Start**: `10.96.0.0`
+- **End**: `10.111.255.255`
+- **Total addresses**: 1,048,576 addresses
+
+Your IP pools should use addresses completely outside this range.
+
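+To verify this arithmetic for any prefix, a small sketch (illustrative only) computes the first and last addresses covered by a CIDR block:
+
+```powershell
+# Sketch: compute the first and last IPv4 addresses covered by a CIDR prefix.
+function Get-CidrRange([string]$Cidr) {
+    $net, $prefix = $Cidr.Split('/')
+    $bytes = ([System.Net.IPAddress]::Parse($net)).GetAddressBytes()
+    [Array]::Reverse($bytes)
+    $netInt = [BitConverter]::ToUInt32($bytes, 0)
+
+    $size  = [math]::Pow(2, 32 - [int]$prefix)   # number of addresses in the block
+    $first = $netInt
+    $last  = [uint32]($netInt + $size - 1)
+
+    foreach ($n in @($first, $last)) {
+        $b = [BitConverter]::GetBytes([uint32]$n)
+        [Array]::Reverse($b)
+        ([System.Net.IPAddress]::new($b)).ToString()
+    }
+}
+
+Get-CidrRange "10.96.0.0/12"   # -> 10.96.0.0 and 10.111.255.255
+```
+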
+### Service CIDR Breakdown
+
+The `/12` CIDR includes these `/16` subnets:
+- `10.96.0.0/16` through `10.111.0.0/16` (16 subnets total)
+
+To avoid overlap, use IP addresses outside the range `10.96.0.0 - 10.111.255.255`.
+
+### Recommended IP Addressing Schemes
+
+| Use Case | Recommended Range | CIDR Notation | Addresses |
+|----------|------------------|---------------|-----------|
+| Small deployment | 192.168.1.0 - 192.168.1.255 | 192.168.1.0/24 | 254 |
+| Medium deployment | 10.0.0.0 - 10.0.255.255 | 10.0.0.0/16 | 65,534 |
+| Large deployment | 172.16.0.0 - 172.31.255.255 | 172.16.0.0/12 | 1,048,574 |
+| Alternative in 10.x.x.x | 10.112.0.0 - 10.127.255.255 | 10.112.0.0/12 | 1,048,574 |
+
+Avoid using:
+- `10.244.x.x` (POD CIDR)
+- `10.96.x.x` through `10.111.x.x` (Service CIDR)
+
+### Why This Matters
+
+While Service CIDR overlaps are only a warning, they can cause:
+1. **Routing conflicts**: Network traffic may be misdirected
+2. **Service connectivity issues**: Kubernetes services may not be reachable
+3. **Unpredictable behavior**: Network behavior may be inconsistent
+4. **Troubleshooting complexity**: Network issues become harder to diagnose
+
+It's best to avoid these overlaps from the start rather than troubleshooting them later.
+
+### Related Documentation
+
+- [AKS IP Address Planning](https://learn.microsoft.com/en-us/azure/aks/aksarc/aks-hci-ip-address-planning)
+- [Azure Local Network Requirements](https://learn.microsoft.com/en-us/azure-stack/hci/concepts/host-network-requirements)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AddNode-NetworkATC-Service.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AddNode-NetworkATC-Service.md
new file mode 100644
index 0000000..3c3cb87
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-AddNode-NetworkATC-Service.md
@@ -0,0 +1,256 @@
+# AzStackHci_Network_Test_Network_AddNode_NetworkATC_Service
+
+| Name | AzStackHci_Network_Test_Network_AddNode_NetworkATC_Service |
+| --- | --- |
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Add-Server |
+
+## Overview
+
+This validator checks that the NetworkATC feature and service are properly installed and running on the new node being added to the cluster. Network ATC is required to manage network intents on Azure Local nodes.
+
+## Requirements
+
+The new node must meet one of the following requirements:
+1. NetworkATC feature is installed AND the NetworkATC service is running, OR
+2. NetworkATC feature is available for installation (not yet installed)
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about the NetworkATC feature and service status. The `Source` field identifies the new node.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_Network_AddNode_NetworkATC_Service",
+ "DisplayName": "Test NetworkATC service is running on new node",
+ "Title": "Test NetworkATC service is running on new node",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Check NetworkATC service is running on new node",
+ "Remediation": "https://learn.microsoft.com/azure-stack/hci/deploy/deployment-tool-checklist",
+ "TargetResourceID": "NetworkATCService",
+ "TargetResourceName": "NetworkATCService",
+ "TargetResourceType": "NetworkATCService",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "NODE4",
+ "Resource": "AddNodeNewNodeNetworkATCServiceCheck",
+ "Detail": "NetworkATC feature/service status: Feature Installed Service Stopped on NODE4",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: NetworkATC Service Not Running
+
+**Error Message:**
+```text
+NetworkATC feature/service status: Feature Installed Service Stopped on NODE4
+```
+
+**Root Cause:** The NetworkATC feature is installed on the node, but the NetworkATC service is not running. This prevents Network ATC from managing network intents on the node.
+
+#### Remediation Steps
+
+1. Check the NetworkATC service status on the new node:
+
+ ```powershell
+ # Run on the new node
+ Get-Service -Name NetworkATC
+ ```
+
+2. Start the NetworkATC service:
+
+ ```powershell
+ Start-Service -Name NetworkATC
+ ```
+
+3. Verify the service is running:
+
+ ```powershell
+ Get-Service -Name NetworkATC | Select-Object Name, Status, StartType
+ ```
+
+4. Ensure the service is set to start automatically:
+
+ ```powershell
+ Set-Service -Name NetworkATC -StartupType Automatic
+ ```
+
+5. Verify the service can communicate properly:
+
+ ```powershell
+ # Check NetworkATC cmdlets are working
+ Get-NetIntent -ErrorAction SilentlyContinue
+ ```
+
+6. Retry the Add-Server operation.
+
+---
+
+### Failure: NetworkATC Feature Not Installed
+
+**Error Message:**
+```text
+NetworkATC feature not installed/available on NODE4
+```
+
+**Root Cause:** The NetworkATC Windows feature is not installed and not available on the system. This should not happen on a properly prepared Azure Local node.
+
+#### Remediation Steps
+
+1. Check if the NetworkATC feature is available:
+
+ ```powershell
+ # Run on the new node
+ Get-WindowsFeature -Name NetworkATC
+ ```
+
+2. Install the NetworkATC feature:
+
+ ```powershell
+ Install-WindowsFeature -Name NetworkATC -IncludeManagementTools
+ ```
+
+3. Verify the installation:
+
+ ```powershell
+ Get-WindowsFeature -Name NetworkATC | Select-Object Name, InstallState
+ ```
+
+4. Start the NetworkATC service:
+
+ ```powershell
+ Start-Service -Name NetworkATC
+ Set-Service -Name NetworkATC -StartupType Automatic
+ ```
+
+5. Verify the service is running:
+
+ ```powershell
+ Get-Service -Name NetworkATC | Select-Object Name, Status, StartType
+ ```
+
+6. Retry the Add-Server operation.
+
+> **Note**: If the NetworkATC feature is not available at all, this may indicate:
+> - The operating system version is incorrect or incomplete
+> - Required components were not installed during OS installation
+> - The node needs to be rebuilt with the proper Azure Local OS image
+
+---
+
+## Additional Information
+
+### Understanding NetworkATC Feature States
+
+The NetworkATC Windows feature can be in three states:
+
+| InstallState | Description | Validator Result |
+|-------------|-------------|-----------------|
+| **Installed** | Feature is installed; service should be running | ✓ Pass (if service running) / ✗ Fail (if service stopped) |
+| **Available** | Feature is available but not yet installed | ✓ Pass (will be installed during Add-Server) |
+| **Removed** or **Unknown** | Feature not available on system | ✗ Fail |
+
+### NetworkATC Service Status
+
+When the feature is installed, the service must be running:
+
+```powershell
+# Check service status
+Get-Service -Name NetworkATC | Format-List Name, Status, StartType, DisplayName
+
+# Expected output:
+# Name : NetworkATC
+# Status : Running
+# StartType : Automatic
+```
+
+### Common Causes of Service Failures
+
+| Cause | Description | Resolution |
+|-------|-------------|-----------|
+| Service stopped manually | Administrator stopped the service | Start the service |
+| Service crashed | Service encountered an error and stopped | Check event logs, restart service |
+| Startup type disabled | Service set to Manual or Disabled | Set to Automatic and start |
+| OS corruption | System files corrupted | Run SFC scan or reinstall OS |
+
+### Troubleshooting Service Start Failures
+
+If the service fails to start:
+
+1. Check event logs:
+
+ ```powershell
+ # Check NetworkATC event logs
+ Get-WinEvent -LogName "Microsoft-Windows-Networking-NetworkATC/Operational" -MaxEvents 50 |
+ Where-Object { $_.TimeCreated -gt (Get-Date).AddHours(-1) } |
+ Select-Object TimeCreated, Id, LevelDisplayName, Message |
+ Format-Table -Wrap
+
+ # Check System event log
+ Get-EventLog -LogName System -Source "Service Control Manager" -Newest 20 |
+ Where-Object { $_.Message -like "*NetworkATC*" } |
+ Format-List TimeGenerated, EntryType, Message
+ ```
+
+2. Try starting with verbose error information:
+
+ ```powershell
+ # Start service and capture any errors
+ try {
+ Start-Service -Name NetworkATC -ErrorAction Stop
+ Write-Host "✓ Service started successfully" -ForegroundColor Green
+ } catch {
+ Write-Host "✗ Service failed to start" -ForegroundColor Red
+ Write-Host "Error: $($_.Exception.Message)" -ForegroundColor Red
+ }
+ ```
+
+3. Check if required modules are available:
+
+ ```powershell
+ # NetworkATC requires PowerShell modules
+ Get-Module -ListAvailable | Where-Object { $_.Name -like "*Network*" -or $_.Name -like "*ATC*" }
+ ```
+
+### Best Practices for Node Preparation
+
+Before adding a server to the cluster:
+
+1. **Verify OS installation**:
+ - Use the correct Azure Local OS image
+ - Complete all Windows updates
+ - Install required features
+
+2. **Verify NetworkATC readiness**:
+ ```powershell
+ # Node preparation checklist
+ Get-WindowsFeature -Name NetworkATC | Format-List Name, InstallState
+ Get-Service -Name NetworkATC | Format-List Name, Status, StartType
+ Get-NetIntent -ErrorAction SilentlyContinue # Should not error
+ ```
+
+3. **Ensure service auto-starts**:
+ - Set NetworkATC to Automatic startup
+ - Verify it survives a reboot
+
+### Related Documentation
+
+- [Add servers to an Azure Local cluster](https://learn.microsoft.com/azure-stack/hci/manage/add-server)
+- [Manage Network ATC](https://learn.microsoft.com/azure-stack/hci/manage/manage-network-atc)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-Cluster-Intent-Status.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-Cluster-Intent-Status.md
new file mode 100644
index 0000000..975aee7
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-Cluster-Intent-Status.md
@@ -0,0 +1,238 @@
+# AzStackHci_Network_Test_Network_Cluster_Intent_Status
+
+| Name | AzStackHci_Network_Test_Network_Cluster_Intent_Status |
+| --- | --- |
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Add-Server, Pre-Update |
+
+## Overview
+
+This validator checks that all network intents configured on existing cluster nodes are in a healthy state. Before adding a new server or performing an update, all network intents must have `ConfigurationStatus = Success` and `ProvisioningStatus = Completed`. The validator will wait up to 14 minutes for intents to stabilize if they are in transient states like "Validating" (which can occur during ATC drift detection).
+
+## Requirements
+
+Each network intent on active cluster nodes must meet the following requirements:
+1. `ConfigurationStatus` must be `Success`
+2. `ProvisioningStatus` must be `Completed`
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about the intent status. The `Source` field identifies the node, and the `Resource` field shows the intent name.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_Network_Cluster_Intent_Status",
+ "DisplayName": "Test Network intent on existing cluster nodes",
+ "Title": "Test Network intent on existing cluster nodes",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Checking if network intent is healthy on existing nodes",
+ "Remediation": "To check cluster network intent status, run below cmdlet on your cluster:\n Get-NetIntentStatus\n ConfigurationStatus should be \"Success\"\n ProvisioningStatus should be \"Completed\":",
+ "TargetResourceID": "NetworkIntent",
+ "TargetResourceName": "NetworkIntent",
+ "TargetResourceType": "NetworkIntent",
+ "Timestamp": "\\/Date(timestamp)\\/",
+ "AdditionalData": {
+ "Source": "NODE1",
+ "Resource": "ManagementComputeIntent",
+ "Detail": "Intent ManagementComputeIntent on host NODE1 in a failed state. ConfigurationStatus: Failed ProvisioningStatus: Error",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Intent in Failed State
+
+**Error Message:**
+```text
+Intent ManagementComputeIntent on host NODE1 in a failed state. ConfigurationStatus: Failed ProvisioningStatus: Error
+```
+
+**Root Cause:** The network intent has failed to provision successfully. This can occur due to configuration errors, hardware issues, driver problems, or conflicts with existing network settings.
+
+#### Remediation Steps
+
+##### Step 1: Check Intent Status Details
+
+1. Get detailed status information for the failing intent:
+
+ ```powershell
+ # Check intent status on all nodes
+ Get-NetIntentStatus
+ ```
+
+2. In the output, locate the failing intent and note its `ConfigurationStatus` and `ProvisioningStatus` values.
+
+3. Check the intent configuration:
+
+ ```powershell
+ # Get intent details
+ Get-NetIntent -Name "ManagementComputeIntent" | Format-List
+ ```
+
+##### Step 2: Check Network Adapter Status
+
+Network intent failures are often related to adapter issues:
+
+1. Verify all adapters in the intent exist and are operational:
+
+ ```powershell
+ # Get the intent's adapters
+ $intent = Get-NetIntent -Name "<IntentName>"   # replace with the failing intent's name
+ $adapterNames = $intent.NetAdapterNamesAsList
+
+ # Check adapter status on the failing node
+ Get-NetAdapter -Name $adapterNames -ErrorAction SilentlyContinue
+ ```
+
+##### Step 3: Common Fixes for Intent Failures
+
+###### Scenario A: Adapter Not Found or Down
+
+If an adapter doesn't exist or is down:
+
+1. Check physical connectivity:
+ - Verify cables are connected
+ - Check switch port status
+ - Verify adapter is enabled in BIOS
+
+2. Enable the adapter if it's disabled:
+
+ ```powershell
+ # On the failing node
+ Enable-NetAdapter -Name "Ethernet" # Replace with adapter name
+ ```
+
+###### Scenario B: Driver Issues
+
+If there are driver-related problems:
+
+1. Check driver version and status:
+
+ ```powershell
+ Get-NetAdapter -Name $adapter -ErrorAction SilentlyContinue
+ ```
+
+2. Update drivers if needed (see related TSG for adapter driver issues).
+
+###### Scenario C: VMSwitch Conflicts
+
+If there are VMSwitch configuration conflicts:
+
+1. Check for existing VMSwitches:
+
+ ```powershell
+ Get-VMSwitch
+ ```
+
+2. If there's a conflicting VMSwitch, Network ATC may need to reconcile it or you may need to remove it manually.
+
+##### Step 4: Retry the Intent
+
+If you've resolved the underlying issue, you can retry the intent provisioning:
+
+1. **Option A: Retry the intent** (triggers reprovisioning):
+
+ ```powershell
+ # Trigger ATC to retry provisioning of the failed intent
+ Set-NetIntentRetryState -ClusterName "<ClusterName>" -Name "<IntentName>" -NodeName "<NodeName>"
+ ```
+
+2. **Option B: Remove and recreate the intent**:
+
+ ```powershell
+ # Remove the failed intent
+ Remove-NetIntent -ClusterName (Get-Cluster).Name -Name "<IntentName>"
+
+ # Recreate it with its original name, adapters, and other parameters
+ Add-NetIntent -ClusterName (Get-Cluster).Name -Name "<IntentName>" ... # include the intent's other required parameters
+ ```
+
+3. Monitor the intent status:
+
+ ```powershell
+ # Monitor intent provisioning
+ Get-NetIntentStatus
+ ```
+
+---
+
+### Transient States
+
+The intent transient states include:
+
+- **Provisioning**: Intent is being initially provisioned
+- **Retrying**: Intent provisioning failed and is retrying
+- **Validating**: ATC is performing drift detection (occurs every 15 minutes)
+- **Pending**: Intent is queued for provisioning
+
+If an intent remains in a transient state and never returns to a **Completed** provisioning status, it likely indicates a problem that needs attention.
+
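+To watch an intent settle out of a transient state, a simple polling loop can help. This is a sketch; the property names follow the standard `Get-NetIntentStatus` output, and the timeout and interval are arbitrary choices to adjust for your environment:
+
+```powershell
+# Sketch: poll intent status until everything is healthy or a timeout expires.
+$deadline = (Get-Date).AddMinutes(15)
+do {
+    $pending = Get-NetIntentStatus | Where-Object {
+        $_.ConfigurationStatus -ne 'Success' -or $_.ProvisioningStatus -ne 'Completed'
+    }
+    if (-not $pending) { Write-Host "All intents healthy"; break }
+    $pending | Select-Object Host, IntentName, ConfigurationStatus, ProvisioningStatus
+    Start-Sleep -Seconds 30
+} while ((Get-Date) -lt $deadline)
+```
+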
+---
+
+## Additional Information
+
+### Understanding Intent Status Values
+
+**ConfigurationStatus values:**
+- `Success`: Intent is properly configured
+- `Failed`: Intent configuration or provisioning failed
+- `Provisioning`: Intent is being set up (transient)
+- `Retrying`: Previous attempt failed, retrying (transient)
+- `Validating`: ATC drift detection in progress (transient)
+- `Pending`: Intent is queued (transient)
+
+**ProvisioningStatus values:**
+- `Completed`: Intent provisioning completed successfully
+- `InProgress`: Intent provisioning is in progress (transient)
+
+### Checking Intent Logs
+
+For detailed troubleshooting, check Network ATC logs:
+
+```powershell
+# Get recent Network ATC events
+Get-WinEvent -LogName "Microsoft-Windows-Networking-NetworkATC/Operational" -MaxEvents 100 |
+ Where-Object { $_.TimeCreated -gt (Get-Date).AddHours(-1) } |
+ Select-Object TimeCreated, Id, LevelDisplayName, Message |
+ Format-Table -AutoSize -Wrap
+```
+
+### Common Intent Failure Causes
+
+| Cause | Description | Resolution |
+|-------|-------------|-----------|
+| Adapter down | Network adapter is disconnected or disabled | Check physical connectivity, enable adapter |
+| Driver issues | Incompatible or faulty network driver | Update or reinstall drivers |
+| VMSwitch conflicts | Existing VMSwitch conflicts with intent | Remove conflicting VMSwitch or reconcile |
+| RDMA configuration | RDMA settings incompatible | Check RDMA configuration |
+| Resource conflicts | IP address or VLAN conflicts | Check network configuration |
+
+### Best Practices
+
+1. **Ensure all adapters are healthy** before creating intents
+2. **Use consistent driver versions** across all nodes
+3. **Document intent configuration** for troubleshooting
+4. **Monitor intent status** regularly
+5. **Allow time for provisioning** before making changes
+6. **Check logs** for detailed error information
+
+### Related Documentation
+
+- [Network ATC overview](https://learn.microsoft.com/en-us/azure/azure-local/concepts/network-atc-overview)
+- [Manage network intents](https://learn.microsoft.com/en-us/windows-server/networking/network-atc/manage-network-atc)
+- [Host network requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-Cluster-MgmtIntent-Exists.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-Cluster-MgmtIntent-Exists.md
new file mode 100644
index 0000000..aea7b33
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-Cluster-MgmtIntent-Exists.md
@@ -0,0 +1,135 @@
+# AzStackHci_Network_Test_Network_Cluster_MgmtIntent_Exists
+
+| Name | AzStackHci_Network_Test_Network_Cluster_MgmtIntent_Exists |
+| --- | --- |
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Add-Server, Pre-Update |
+
+## Overview
+
+This validator checks that exactly one management intent exists on the cluster. The management intent defines which network adapters are used for management traffic, and there must be exactly one management intent configured for the cluster to operate correctly.
+
+## Requirements
+
+The cluster must meet the following requirement:
+1. Exactly one network intent must have `IsManagementIntentSet` equal to `True`
+
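+A quick way to check this requirement directly (a sketch using the standard `Get-NetIntent` output):
+
+```powershell
+# Sketch: count management intents; the cluster should have exactly one.
+$mgmtIntents = @(Get-NetIntent | Where-Object { $_.IsManagementIntentSet })
+if ($mgmtIntents.Count -eq 1) {
+    "OK: one management intent found ($($mgmtIntents[0].IntentName))"
+} else {
+    "Problem: found $($mgmtIntents.Count) management intent(s); expected exactly 1"
+}
+```
+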
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about how many management intents were found.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_Network_Cluster_MgmtIntent_Exists",
+ "DisplayName": "Test one management intent exists on cluster",
+ "Title": "Test one management intent exists on cluster",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Checking if there is one and only one management intent on existing cluster",
+ "Remediation": "To check cluster network intent information, run below cmdlet on your cluster:\n Get-NetIntent\n Make sure one and only one intent is management intent: IsManagementIntentSet == \"True\"",
+ "TargetResourceID": "NetworkIntent",
+ "TargetResourceName": "NetworkIntent",
+ "TargetResourceType": "NetworkIntent",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "ClusterMgmtIntent",
+ "Resource": "ClusterMgmtIntent",
+ "Detail": "There are [ 0 ] management intent(s) on the cluster. Expecting [ 1 ]. Please check the cluster network intent configuration.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: No Management Intent Found
+
+**Error Message:**
+```text
+There are [ 0 ] management intent(s) on the cluster. Expecting [ 1 ]. Please check the cluster network intent configuration.
+```
+
+**Root Cause:** No network intent is configured with `IsManagementIntentSet` set to `True`. This means the cluster has no defined management traffic path.
+
+#### Remediation Steps
+
+##### Configure Management Intent
+
+1. Check existing intents to see if one should be designated as the management intent:
+
+ ```powershell
+ Get-NetIntent
+ ```
+
+2. If an intent exists that should be the management intent but isn't configured correctly, you'll need to remove it and re-configure it:
+
+ ```powershell
+ # Example: Recreate an intent with management traffic
+ Remove-NetIntent -ClusterName (Get-Cluster).Name -Name $intentName
+
+ # Wait for removal
+ Start-Sleep -Seconds 10
+
+ # Create new management intent
+ Add-NetIntent -ClusterName (Get-Cluster).Name -Name $intentName -AdapterName $adapters -Management ... # supply $intentName (string) and $adapters (string array), plus the other required parameters
+ ```
+
+3. Verify the management intent was created:
+
+ ```powershell
+ Get-NetIntent | Where-Object { $_.IsManagementIntentSet -eq $true }
+ ```
+
+4. Check the intent status:
+
+ ```powershell
+ Get-NetIntentStatus
+ ```
+
+---
+
+## Additional Information
+
+### Common Scenarios
+
+| Scenario | Configuration | Example |
+|----------|--------------|---------|
+| **Dedicated Management/Compute** | Separate adapters for management/compute + storage | Management/Compute: eth0, eth1<br>Storage: eth2, eth3 |
+| **Fully Converged** | Management + Compute + Storage | ConvergedIntent: eth0, eth1<br>(all traffic types) |
+
+Note that a management intent is always a compute intent as well.
+
+### Checking Intent Configuration
+
+```powershell
+# Detailed view of all intents and which roles each carries
+Get-NetIntent | Select-Object IntentName, IsManagementIntentSet, IsComputeIntentSet, IsStorageIntentSet
+```
+
+### Best Practices
+
+1. **Plan network design** before creating intents
+2. **Document intent configuration** for future reference
+3. **Test intent changes** in a non-production environment first
+4. **Wait for intents to stabilize** (ConfigurationStatus = Success) before proceeding
+5. **Verify connectivity** after making intent changes
+
+### Related Documentation
+
+- [Network ATC overview](https://learn.microsoft.com/en-us/azure/azure-local/concepts/network-atc-overview)
+- [Manage network intents](https://learn.microsoft.com/azure-stack/hci/manage/manage-network-atc)
+- [Host network requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements)
+- [Add servers to an Azure Local cluster](https://learn.microsoft.com/azure-stack/hci/manage/add-server)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-Cluster-StorageIntent-Exists.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-Cluster-StorageIntent-Exists.md
new file mode 100644
index 0000000..cc08008
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-Cluster-StorageIntent-Exists.md
@@ -0,0 +1,225 @@
+# AzStackHci_Network_Test_Network_Cluster_StorageIntentExistence
+
+| Name | AzStackHci_Network_Test_Network_Cluster_StorageIntentExistence |
+|------|------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Add-Server |
+
+## Overview
+
+This validator checks that a storage intent exists on the cluster before adding a new server. Storage intents are required for 2+ node Azure Local clusters to ensure proper storage network configuration. Without a storage intent, the new server cannot be added successfully.
+
+## Requirements
+
+The cluster must meet the following requirement:
+1. At least one network intent must have `IsStorageIntentSet` equal to `True`
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about whether a storage intent was found. The `Source` field identifies the cluster name.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_Network_Cluster_StorageIntentExistence",
+ "DisplayName": "Test Storage intent should exists on current cluster",
+ "Title": "Test Storage intent existence",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Check if the storage intent is configured on the existing cluster",
+ "Remediation": "Storage intent is required for 2+ nodes Azure Local cluster.\nPlease run below PowerShell cmdlet to add storage intent into the cluster:\n Add-NetIntent\nCheck https://learn.microsoft.com/en-us/azure/azure-local/ for more information!",
+ "TargetResourceID": "StorageIntent",
+ "TargetResourceName": "StorageIntent",
+ "TargetResourceType": "StorageIntent",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "MyCluster",
+ "Resource": "AddNodeStorageIntentCheck",
+ "Detail": "Storage Intent is not configured on the cluster MyCluster.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Storage Intent Not Configured
+
+**Error Message:**
+```text
+Storage Intent is not configured on the cluster MyCluster.
+```
+
+**Root Cause:** The cluster does not have a storage intent configured. Storage intents are required for 2+ node clusters to define which network adapters are used for storage traffic (SMB, iSCSI, or RDMA traffic for Storage Spaces Direct).
+
+#### Remediation Steps
+
+##### Step 1: Verify Current Intent Configuration
+
+1. Check all existing intents on the cluster:
+
+ ```powershell
+ # List all intents
+ Get-NetIntent | Select-Object IntentName, IsStorageIntentSet, TrafficType, Adapter
+ ```
+
+2. Check if any intent has storage traffic type:
+
+ ```powershell
+ # Check for storage traffic in any intent
+ Get-NetIntent | Where-Object { $_.TrafficType -contains "Storage" } |
+ Select-Object IntentName, TrafficType, Adapter
+ ```
+
+3. Verify intent status across nodes:
+
+ ```powershell
+ # Check intent status on all cluster nodes
+ Get-NetIntentStatus | Select-Object Host, IntentName, IsStorageIntentSet |
+ Format-Table -AutoSize
+ ```
+
+##### Step 2: Determine Storage Network Design
+
+Before adding a storage intent, determine your storage network design:
+
+**Option A: Dedicated Storage Network**
+- Separate physical adapters dedicated to storage traffic
+- Recommended for high-performance workloads
+- Example: eth2, eth3 for storage only
+
+**Option B: Converged Network**
+- Storage traffic shares adapters with other traffic types
+- Recommended for most deployments
+- Example: eth0, eth1 for management, compute, and storage
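+
+To help choose between the two designs, it can be useful to check which adapters are RDMA-capable; a minimal sketch using the standard `Get-NetAdapterRdma` cmdlet:
+
+```powershell
+# List active adapters with link speed, then their RDMA capability
+Get-NetAdapter | Where-Object { $_.Status -eq "Up" } |
+    Select-Object Name, LinkSpeed, Status
+Get-NetAdapterRdma | Select-Object Name, Enabled
+```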
+
+##### Step 3: Add Storage Intent
+
+Choose the appropriate option based on your network design:
+
+###### Option A: Add Dedicated Storage Intent
+
+If you have dedicated storage adapters:
+
+1. Identify the storage adapters on each node:
+
+ ```powershell
+ # Check adapters on all cluster nodes
+ Get-NetAdapter | Where-Object { $_.Status -eq "Up" }
+ ```
+
+2. Make sure you have a dedicated storage intent:
+
+ If you accidentally removed the storage intent that was provisioned during Azure Local deployment, or if your current Azure Local cluster is a single-node cluster, you can add it back using the **Set-StorageNetworkIntent** cmdlet.
+
+###### Option B: Add Storage to Existing Intent (Converged)
+
+If storage should share adapters with other traffic, you will need to remove the current intent and re-create it with the additional `-Storage` option, as sketched below. Your system might experience a disconnection if you remove the management intent.
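+
+A minimal sketch, assuming `$intentName` holds the existing intent name and `$adapters` a string array of its adapter names; adjust the switches to the traffic types the intent should carry:
+
+```powershell
+# Remove the existing intent
+Remove-NetIntent -ClusterName (Get-Cluster).Name -Name $intentName
+
+# Give Network ATC time to clean up
+Start-Sleep -Seconds 10
+
+# Re-create the intent with storage traffic added
+Add-NetIntent -ClusterName (Get-Cluster).Name -Name $intentName `
+    -AdapterName $adapters -Management -Compute -Storage
+```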
+
+##### Step 4: Verify Storage Intent Creation
+
+1. Confirm the storage intent exists:
+
+ ```powershell
+ # Check for storage intent
+ Get-NetIntent | Where-Object { $_.IsStorageIntentSet -eq $true }
+ ```
+
+2. Monitor intent provisioning:
+
+ ```powershell
+ Get-NetIntentStatus
+ ```
+
+3. Verify storage network connectivity:
+
+ ```powershell
+ # Check storage adapter IPs
+ ipconfig /all
+
+ # Ping another node's storage IP, sourcing from this node's storage IP
+ # (replace the placeholders with actual addresses)
+ ping <remote storage IP> -S <local storage IP>
+
+ ```
+
+##### Step 5: Retry Add-Server Operation
+
+After successfully creating the storage intent and verifying it's healthy, retry the Add-Server operation.
+
+---
+
+## Additional Information
+
+### Why Storage Intent is Required
+
+For 2+ node Azure Local clusters, storage intents are required to:
+1. **Define storage network paths** - Specifies which adapters handle storage traffic
+2. **Enable Storage Spaces Direct** - Required for distributed storage
+3. **Optimize storage performance** - Ensures proper QoS and traffic isolation
+4. **Enable RDMA** - Configures RDMA for high-performance storage
+
+### Common Storage Network Designs
+
+| Design | Description | Traffic Types | Adapters |
+|--------|-------------|---------------|----------|
+| **Dedicated Storage** | Separate adapters for storage | Storage only | eth2, eth3 |
+| **Fully Converged** | All traffic on same adapters | Management, Compute, Storage | eth0, eth1 |
+
+### Checking Storage Intent Configuration
+
+```powershell
+# View all storage-related configuration
+Get-NetIntent
+
+# Check storage adapter IPs
+Get-NetIPAddress
+# or
+ipconfig /all
+```
+
+### Single-Node Clusters
+
+**Note**: Single-node clusters do not require a storage intent. This validator only applies when adding a server to an existing cluster (making it 2+ nodes).
+
+### Storage Network Requirements
+
+When creating a storage intent, ensure:
+1. **Adapters support RDMA** (if using RDMA)
+2. **Sufficient bandwidth** (10Gbps+ recommended)
+3. **Dedicated VLAN**
+4. **Consistent configuration** across all nodes
+5. **IP addressing planned** for storage network
+
+### Troubleshooting Intent Provisioning
+
+If the intent fails to provision:
+1. Check adapter status (see Cluster Intent Status TSG)
+2. Verify driver compatibility
+3. Check for VMSwitch conflicts
+4. Review Network ATC logs
+
+### Best Practices
+
+1. **Plan storage network design** before creating intents
+2. **Use RDMA-capable adapters** for best performance
+3. **Separate storage traffic** from other traffic when possible
+4. **Document network configuration** for future reference
+5. **Test storage connectivity** after creating intent
+6. **Monitor intent status** regularly
+
+### Related Documentation
+
+- [Network ATC overview](https://learn.microsoft.com/en-us/azure/azure-local/concepts/network-atc-overview)
+- [Host network requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements)
+- [Add servers to an Azure Local cluster](https://learn.microsoft.com/azure-stack/hci/manage/add-server)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-Azure-Endpoint-Connection.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-Azure-Endpoint-Connection.md
new file mode 100644
index 0000000..fc5d9ba
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-Azure-Endpoint-Connection.md
@@ -0,0 +1,412 @@
+# AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_{ServiceName}
+
+| Name | AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_{ServiceName} |
+|------|------|
+| Severity | Critical or Warning: Severity varies by service. Most are Critical, some are Warning. |
+| Applicable Scenarios | Deployment (without ArcGateway), Upgrade (without ArcGateway) |
+
+## Overview
+
+This is a **dynamically generated validator** that tests connectivity from infrastructure pool IPs to specific Azure and Arc-enabled services endpoints. The validator name varies based on the service being tested (e.g., `AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_AzureArc`, `AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_AzureResourceManager`, etc.).
+
+The validator uses `curl.exe` to test HTTP/HTTPS connectivity from each infrastructure IP to required service endpoints, ensuring that workloads running on infrastructure IPs can reach essential Azure services.
+
+## Requirements
+
+1. Infrastructure IP must be able to reach the service endpoint via HTTP/HTTPS
+2. DNS resolution must work for the endpoint hostname
+3. Network path must allow outbound connectivity to the service
+4. Firewall rules must permit the required protocol and port
+5. If proxy is configured, it must be functional and allow the connection
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. The validator name will include the specific service being tested. Check the `AdditionalData.Detail` field for connection details.
+
+```json
+{
+ "Name": "AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_AzureArc",
+ "DisplayName": "Test outbound connection for IP in infra IP pool to Azure Arc service",
+ "Title": "Test outbound connection for IP in infra IP pool",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Test outbound connection for IP in infra IP pool to Azure Arc service endpoints",
+ "Remediation": "Make sure infra IP 10.0.0.100 could connect to public endpoint https://management.azure.com correctly. \nhttps://learn.microsoft.com/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls",
+ "TargetResourceID": "AzureArc_Connectivity",
+ "TargetResourceName": "AzureArc_Connectivity",
+ "TargetResourceType": "AzureArc_Connectivity",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "SERVER01",
+ "Resource": "10.0.0.100-Ethernet",
+ "Detail": "[FAILED] Connection from 10.0.0.100 (Ethernet) to https://management.azure.com failed after 10 attempts",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Cannot Connect to Azure Service Endpoint
+
+**Error Message:**
+```text
+[FAILED] Connection from 10.0.0.100 (Ethernet) to https://management.azure.com failed after 10 attempts
+```
+
+**Root Cause:** The infrastructure IP cannot establish connectivity to the Azure service endpoint. Possible causes:
+- Firewall blocking outbound HTTPS traffic
+- Network path not allowing connectivity to Azure
+- Proxy configuration issues (if proxy is used)
+- DNS resolution failing for the endpoint
+- Service endpoint unreachable or blocked by network policy
+
+#### Remediation Steps
+
+##### 1. Identify the Failing Service Endpoint
+
+Check the validator name and error message to identify which service is failing:
+
+```powershell
+# Common service endpoints tested:
+# - Azure Arc: management.azure.com, login.microsoftonline.com
+# - Azure Resource Manager: management.azure.com
+# - Azure Identity: login.microsoftonline.com, login.windows.net
+# - Azure Storage: *.blob.core.windows.net
+# - Azure Key Vault: *.vault.azure.net
+# - Azure Monitor/Telemetry: *.ods.opinsights.azure.com
+
+# The specific endpoint will be in the error detail
+```
+
+##### 2. Test DNS Resolution
+
+Verify the endpoint hostname can be resolved:
+
+```powershell
+# Example: Test Azure Resource Manager endpoint
+$endpoint = "management.azure.com" # Replace with your failing endpoint
+
+# Test DNS resolution
+Resolve-DnsName $endpoint
+
+# If resolution fails, check DNS servers and connectivity
+# See: Troubleshoot-Network-Test-InfraIP-DNS-Client-Readiness.md
+```
+
+##### 3. Test Basic Connectivity from Management IP
+
+First verify connectivity works from the management IP:
+
+```powershell
+# Test from management IP using Test-NetConnection
+$endpoint = "management.azure.com"
+Test-NetConnection -ComputerName $endpoint -Port 443 -InformationLevel Detailed
+
+# Test using Invoke-WebRequest
+try {
+ $response = Invoke-WebRequest -Uri "https://$endpoint" -UseBasicParsing -TimeoutSec 10
+ Write-Host "✓ Connection successful from management IP" -ForegroundColor Green
+} catch {
+ Write-Host "✗ Connection failed from management IP" -ForegroundColor Red
+ Write-Host " Error: $($_.Exception.Message)" -ForegroundColor Yellow
+}
+```
+
+If the management IP can't connect either, the issue lies in the broader enterprise network configuration rather than anything specific to the infrastructure IPs.
+
+##### 4. Check Firewall Rules
+
+Verify outbound HTTPS/HTTP traffic is allowed:
+
+```powershell
+# Check Windows Firewall for outbound rules
+Get-NetFirewallRule -Direction Outbound -Enabled True |
+ Where-Object { $_.DisplayName -like "*HTTP*" -or $_.DisplayName -like "*Web*" } |
+ Select-Object DisplayName, Action, Enabled |
+ Format-Table -AutoSize
+
+# Check if outbound connections are blocked by default
+Get-NetFirewallProfile | Select-Object Name, DefaultOutboundAction
+
+# Most corporate environments allow outbound HTTPS (port 443)
+# Check with network team if specific Azure IPs/domains need to be allowed
+```
+
+##### 5. Test with curl.exe (Same Tool as Validator)
+
+Use curl.exe to test exactly as the validator does:
+
+```powershell
+# Test from any IP on the system (management IP)
+$endpoint = "https://management.azure.com"
+$curlCommand = "curl.exe -sS --connect-timeout 15 --max-time 20 `"$endpoint`" 2>&1"
+
+Write-Host "Running: $curlCommand" -ForegroundColor Cyan
+$result = Invoke-Expression $curlCommand
+
+if ($LASTEXITCODE -eq 0) {
+ Write-Host "✓ curl.exe succeeded" -ForegroundColor Green
+ Write-Host "Response (first 200 chars): $($result[0..200] -join '')" -ForegroundColor White
+} else {
+ Write-Host "✗ curl.exe failed with exit code: $LASTEXITCODE" -ForegroundColor Red
+ Write-Host "Error: $result" -ForegroundColor Yellow
+}
+```
+
+**Common curl.exe exit codes:**
+- `0` - Success
+- `6` - Couldn't resolve host (DNS issue)
+- `7` - Failed to connect (network issue)
+- `28` - Timeout
+- `35` - SSL/TLS handshake failed
+- `60` - SSL certificate problem
+
+##### 6. Check Proxy Configuration
+
+**If your environment uses a proxy:**
+- Ensure proxy allows traffic to Azure endpoints
+- Verify proxy authentication is working
+- Check proxy can resolve Azure endpoint names
+- Some proxies block non-standard ports (anything other than 80/443)
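+
+To check whether the proxy itself can reach the endpoint, curl.exe can be pointed at it explicitly; the proxy URL below is a placeholder for your environment:
+
+```powershell
+# Test the endpoint through the proxy (replace the proxy URL with yours)
+curl.exe -sS --connect-timeout 15 --max-time 20 `
+    --proxy "http://proxy.contoso.com:8080" "https://management.azure.com"
+```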
+
+##### 7. Verify Azure Firewall Requirements
+
+Ensure all required Azure endpoints are accessible. See the Azure Local firewall requirements documentation.
+
+**Core required endpoints (examples):**
+- `management.azure.com` - Azure Resource Manager
+- `login.microsoftonline.com` - Azure AD authentication
+- `*.blob.core.windows.net` - Azure Storage
+- `*.servicebus.windows.net` - Azure Service Bus
+- `*.vault.azure.net` - Azure Key Vault
+
+**Check endpoint access:**
+```powershell
+$requiredEndpoints = @(
+ "management.azure.com",
+ "login.microsoftonline.com",
+ "login.windows.net",
+ "graph.windows.net",
+ "*.blob.core.windows.net", # Note: wildcards need actual hostname
+ "*.servicebus.windows.net"
+)
+
+foreach ($endpoint in $requiredEndpoints) {
+    if ($endpoint.Contains("*")) {
+ Write-Host "Wildcard endpoint: $endpoint (test with actual hostname)" -ForegroundColor Yellow
+ continue
+ }
+
+ Write-Host "`nTesting: $endpoint" -ForegroundColor Cyan
+ $test = Test-NetConnection -ComputerName $endpoint -Port 443 -InformationLevel Quiet
+ if ($test) {
+ Write-Host " ✓ Port 443 accessible" -ForegroundColor Green
+ } else {
+ Write-Host " ✗ Port 443 NOT accessible" -ForegroundColor Red
+ }
+}
+```
+
+##### 8. Check Network Routing
+
+Verify routing to Azure public IPs:
+
+```powershell
+# Check default route
+Get-NetRoute -DestinationPrefix "0.0.0.0/0" | Format-Table DestinationPrefix, NextHop, InterfaceAlias -AutoSize
+
+# Trace route to Azure endpoint (from management IP)
+Test-NetConnection -ComputerName management.azure.com -TraceRoute |
+ Select-Object -ExpandProperty TraceRoute
+```
+
+Ensure:
+- Default route exists and points to correct gateway
+- Gateway has internet connectivity
+- No routing policies block Azure IP ranges
+
+##### 9. Review Service-Specific Documentation
+
+Each Azure service may have specific requirements:
+
+**Azure Arc:**
+- See: [Azure Arc network requirements](https://learn.microsoft.com/azure/azure-arc/servers/network-requirements)
+- Requires connectivity to multiple endpoints
+- Uses Azure Resource Manager, Azure AD, and Arc-specific endpoints
+
+**Azure Resource Manager:**
+- Primary endpoint: `management.azure.com`
+- Used for all ARM operations
+
+**Azure Storage:**
+- Wildcard endpoints: `*.blob.core.windows.net`, `*.table.core.windows.net`
+- May need specific storage account names
+
+**Azure Key Vault:**
+- Wildcard: `*.vault.azure.net`
+- May need specific vault names
+
+##### 10. Test from Infrastructure IP (Advanced)
+
+To test exactly as the validator does, you may need to temporarily assign an infrastructure IP to a test adapter and run the connection test from that IP, as sketched after the warning below.
+
+> **Warning:** This requires network reconfiguration and may disrupt connectivity. Only perform in a test environment or during maintenance window.
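+
+A rough sketch of such a manual test, assuming a test vNIC named "TestVNic" already exists on the virtual switch and 10.0.0.100 is an unused infrastructure pool IP:
+
+```powershell
+$infraIP = "10.0.0.100"   # placeholder: an unused IP from the infrastructure pool
+
+# Temporarily assign the infrastructure IP to the test vNIC
+New-NetIPAddress -InterfaceAlias "vEthernet (TestVNic)" -IPAddress $infraIP -PrefixLength 24
+
+# Test the endpoint sourcing from that IP, as the validator does
+curl.exe -sS --connect-timeout 15 --max-time 20 --interface $infraIP "https://management.azure.com"
+
+# Clean up
+Remove-NetIPAddress -InterfaceAlias "vEthernet (TestVNic)" -IPAddress $infraIP -Confirm:$false
+```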
+
+##### 11. Temporary Workarounds
+
+If you cannot immediately fix connectivity:
+
+**Option 1: Use ArcGateway (if applicable)**
+- ArcGateway provides an alternative connectivity method
+- When enabled, infrastructure IP connectivity tests are skipped
+- Check if your scenario supports ArcGateway
+
+**Option 2: Adjust firewall rules temporarily**
+- Work with network team to allow required Azure endpoints
+- Document which endpoints are blocked
+- Plan permanent solution
+
+##### 12. Retry the Validation
+
+After fixing connectivity issues, re-run the Environment Validator.
+
+---
+
+## Additional Information
+
+### How Endpoint Connectivity Testing Works
+
+For each infrastructure IP (up to 9 IPs tested):
+
+1. **Prerequisites validated** (Hyper-V, vSwitch, DNS, etc.)
+2. **IP assigned** to temporary test vNIC
+3. **Gateway tested** (ICMP ping)
+4. **DNS tested** (port 53 TCP)
+5. **For each required Azure endpoint:**
+   - curl.exe tests the connection using `--interface <infrastructure IP>`
+   - Tests both GET and HEAD requests
+ - Retries up to 10 times (default)
+ - Creates a result object with status
+
+### curl.exe Command Format
+
+The validator uses curl.exe with these parameters:
+
+```bash
+curl.exe -sS \
+ --connect-timeout 15 \
+ --max-time 20 \
+ "https://management.azure.com" \
+ --interface 10.0.0.100 \
+ 2>&1
+```
+
+Parameters:
+- `-sS` - Silent with errors shown
+- `--connect-timeout` - TCP connection timeout
+- `--max-time` - Maximum total time
+- `--interface` - Source IP to use
+- `2>&1` - Redirect stderr to stdout
+
+### Service List Source
+
+The list of services to test comes from [Azure Local Endpoints Definition Manifest](https://aka.ms/hciconnectivitytargets).
+
+### Validator Naming Pattern
+
+Validator names follow this pattern:
+```
+AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_{ServiceName}
+```
+
+**Examples:**
+- `AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_AzureArc`
+- `AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_AzureResourceManager`
+- `AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_AzureStorage`
+
+### Severity Levels
+
+Most endpoint validators use **CRITICAL** severity, but some use **WARNING**:
+
+| Severity | When Used | Impact |
+|----------|-----------|--------|
+| **CRITICAL** | Core services (Arc, ARM, AAD) | Blocks deployment if fails |
+| **WARNING** | Optional services, telemetry | Doesn't block deployment |
+
+The severity is defined for each service in the manifest file.
+
+### Common Failure Scenarios
+
+| Scenario | Error | Solution |
+|----------|-------|----------|
+| **DNS failure** | "Couldn't resolve host" | Fix DNS configuration |
+| **Firewall block** | Connection timeout | Allow outbound HTTPS |
+| **Proxy issue** | SSL/certificate error | Check proxy SSL interception |
+| **Network down** | "Failed to connect" | Check network infrastructure |
+| **Service outage** | HTTP 503 errors | Check Azure service status |
+| **Certificate error** | SSL handshake failed | Check system certificates |
+
+### Prerequisites for This Validator
+
+These validators must pass first (in order):
+1. Hyper-V Readiness
+2. VMSwitch Readiness
+3. Management vNIC Readiness
+4. Test vNIC Readiness
+5. DNS Client Readiness
+6. **IP Readiness** - Infrastructure IP reaches gateway
+7. **DNS Port 53** - Infrastructure IP reaches DNS servers
+
+Only after all prerequisites pass will endpoint connectivity tests run.
+
+### When This Validator is Skipped
+
+The infrastructure IP connectivity validator (including all endpoint tests) is **skipped** when:
+
+| Condition | Reason |
+|-----------|--------|
+| **ArcGateway enabled** | ArcGateway provides alternative connectivity |
+| **Deployment scenario** | Only runs if ArcGateway is NOT enabled |
+| **Upgrade scenario** | Only runs if ArcGateway is NOT enabled |
+
+### Verifying Azure Service Health
+
+If endpoints are failing, check Azure service health:
+
+```powershell
+# Check Azure status page
+Start-Process "https://status.azure.com"
+
+# From portal: portal.azure.com -> Service Health
+```
+
+### Related Validators
+
+**Prerequisites:**
+- AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_Hyper_V_Readiness
+- AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_VMSwitch_Readiness
+- AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_MANAGEMENT_VNIC_Readiness
+- AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_vNIC_Readiness
+- AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNSClientServerAddress_Readiness
+- AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_IPReadiness
+- AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNS_Server_Port_53
+
+### Related Documentation
+- [Azure Local firewall requirements](https://learn.microsoft.com/azure/azure-local/concepts/firewall-requirements)
+- [Azure Arc network requirements](https://learn.microsoft.com/azure/azure-arc/servers/network-requirements)
+- [Azure Local network requirements](https://learn.microsoft.com/azure/azure-local/concepts/host-network-requirements)
+- [Configure proxy settings](https://learn.microsoft.com/azure/azure-local/manage/configure-proxy-settings-23h2)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-DNS-Client-Readiness.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-DNS-Client-Readiness.md
new file mode 100644
index 0000000..2386a42
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-DNS-Client-Readiness.md
@@ -0,0 +1,282 @@
+# AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNSClientServerAddress_Readiness
+
+| Name | AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNSClientServerAddress_Readiness |
+|------|------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Deployment (without ArcGateway), Upgrade (without ArcGateway) |
+
+## Overview
+
+This validator checks that DNS client server addresses are properly configured on the management adapter. DNS servers are required for the validator to test infrastructure IP connectivity to public Azure endpoints.
+
+## Requirements
+
+1. DNS client server addresses must be configured on the management adapter
+2. At least one DNS server IP must be available
+3. DNS servers must be accessible for connectivity testing
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information.
+
+```json
+{
+ "Name": "AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNSClientServerAddress_Readiness",
+ "DisplayName": "Test DNS client server addresses readiness for all IP in infra IP pool",
+ "Title": "Test DNS client server addresses readiness for all IP in infra IP pool",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Test DNS client server addresses readiness for all IP in infra IP pool",
+ "Remediation": "Set DNS client server address correctly on management adapter [ vManagement(ManagementIntent) ] on SERVER01. Check it using Get-DnsClientServerAddress",
+ "TargetResourceID": "Infra_IP_Connection_DNSClientReadiness",
+ "TargetResourceName": "Infra_IP_Connection_DNSClientReadiness",
+ "TargetResourceType": "Infra_IP_Connection_DNSClientReadiness",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "SERVER01",
+ "Resource": "DNSClientReadiness",
+ "Detail": "[FAILED] Cannot find correctly DNS client server address on host SERVER01.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: DNS Client Server Address Not Found
+
+**Error Message:**
+```text
+[FAILED] Cannot find correctly DNS client server address on host SERVER01.
+```
+
+**Root Cause:** No DNS server addresses are configured on the management adapter, or the validator cannot retrieve them. DNS servers are required to resolve public endpoint names during infrastructure IP connectivity testing.
+
+#### Remediation Steps
+
+##### 1. Check Current DNS Configuration
+
+Verify DNS settings on management adapters:
+
+```powershell
+# Check DNS servers on all adapters
+Get-DnsClientServerAddress -AddressFamily IPv4 |
+ Where-Object { $_.ServerAddresses.Count -gt 0 } |
+ Format-Table InterfaceAlias, ServerAddresses -AutoSize
+
+# Check specifically on management adapter
+$mgmtAdapter = Get-NetAdapter -Name "myAdapter" # Replace with your actual adapter name in the system
+
+if ($mgmtAdapter) {
+ Write-Host "Management Adapter: $($mgmtAdapter.Name)" -ForegroundColor Cyan
+ $dnsServers = Get-DnsClientServerAddress -InterfaceAlias $mgmtAdapter.Name -AddressFamily IPv4
+ Write-Host "DNS Servers: $($dnsServers.ServerAddresses -join ', ')" -ForegroundColor White
+} else {
+ Write-Host "No management adapter found" -ForegroundColor Red
+}
+```
+
+##### 2. Configure DNS Servers
+
+Set DNS server addresses on the management adapter:
+
+```powershell
+# Set DNS servers on management adapter
+$mgmtAdapter = Get-NetAdapter | Where-Object { $_.Name -like "*vManagement*" } | Select-Object -First 1
+
+if ($mgmtAdapter) {
+ # Example: Set DNS servers (replace with your actual DNS server IPs)
+ $dnsServers = @("192.168.1.100", "192.168.1.101") # Replace with your DNS servers
+
+ Set-DnsClientServerAddress -InterfaceAlias $mgmtAdapter.Name -ServerAddresses $dnsServers
+
+ # Verify configuration
+ Get-DnsClientServerAddress -InterfaceAlias $mgmtAdapter.Name -AddressFamily IPv4 |
+ Select-Object InterfaceAlias, ServerAddresses
+
+ Write-Host "✓ DNS servers configured successfully" -ForegroundColor Green
+} else {
+ Write-Host "✗ Management adapter not found" -ForegroundColor Red
+}
+```
+
+##### 3. Test DNS Resolution
+
+Verify DNS is working:
+
+```powershell
+# Test DNS resolution
+$testDomains = @("microsoft.com", "azure.com", "portal.azure.com")
+
+foreach ($domain in $testDomains) {
+ try {
+ $result = Resolve-DnsName $domain -ErrorAction Stop
+ Write-Host "✓ Successfully resolved $domain" -ForegroundColor Green
+ Write-Host " IP: $($result[0].IPAddress)" -ForegroundColor White
+ } catch {
+ Write-Host "✗ Failed to resolve $domain" -ForegroundColor Red
+ }
+}
+```
+
+##### 4. Check DNS Server Connectivity
+
+Verify the configured DNS servers are reachable:
+
+```powershell
+$mgmtAdapter = Get-NetAdapter | Where-Object { $_.Name -like "*vManagement*" } | Select-Object -First 1
+$dnsConfig = Get-DnsClientServerAddress -InterfaceAlias $mgmtAdapter.Name -AddressFamily IPv4
+
+foreach ($dnsServer in $dnsConfig.ServerAddresses) {
+ Write-Host "`nTesting DNS server: $dnsServer" -ForegroundColor Cyan
+
+ # Test ping
+ $ping = Test-Connection -ComputerName $dnsServer -Count 2 -Quiet
+ if ($ping) {
+ Write-Host " ✓ Ping successful" -ForegroundColor Green
+ } else {
+ Write-Host " ✗ Ping failed" -ForegroundColor Red
+ }
+
+ # Test port 53 (DNS)
+ $tcpTest = Test-NetConnection -ComputerName $dnsServer -Port 53 -InformationLevel Quiet
+ if ($tcpTest) {
+ Write-Host " ✓ Port 53 (DNS) accessible" -ForegroundColor Green
+ } else {
+ Write-Host " ✗ Port 53 (DNS) not accessible" -ForegroundColor Red
+ }
+}
+```
+
+##### 5. Check Firewall Rules
+
+Ensure firewall is not blocking DNS:
+
+```powershell
+# Check DNS Client firewall rule
+Get-NetFirewallRule -DisplayName "*DNS*" | Where-Object { $_.Enabled -eq $true } |
+ Select-Object DisplayName, Direction, Action, Enabled |
+ Format-Table -AutoSize
+
+# If name-resolution firewall rules were disabled, re-enabling the Network
+# Discovery group restores the built-in rules (for example, LLMNR)
+Enable-NetFirewallRule -DisplayGroup "Network Discovery"
+```
+
+##### 6. Retry the Validation
+
+After configuring DNS servers, re-run the Environment Validator.
+
+---
+
+## Additional Information
+
+### Why DNS is Required
+
+The infrastructure IP connectivity validator needs DNS to:
+
+1. **Resolve public Azure endpoint names** - Endpoints like `management.azure.com`, `login.microsoftonline.com`
+2. **Test name resolution** - Validates infrastructure IPs can reach DNS servers
+3. **Verify end-to-end connectivity** - From infrastructure IP → DNS → Public endpoint
+
+### DNS Server Requirements
+
+For Azure Local deployments:
+
+| Deployment Type | DNS Requirements |
+|----------------|------------------|
+| **Static IP** | Must manually configure DNS servers on management adapter |
+| **DHCP** | DNS servers should be provided by DHCP server |
+| **Domain-joined** | Use domain DNS servers (typically domain controllers) |
+| **Workgroup** | Use corporate DNS servers or public DNS (e.g., 8.8.8.8) |
+
+### Recommended DNS Server Configuration
+
+**For production deployments:**
+- Use at least 2 DNS servers for redundancy
+- DNS servers should be internal corporate DNS that can resolve public names
+- DNS servers must be reachable from infrastructure IP pool
+
+**Example configurations:**
+```powershell
+# Domain-joined (recommended)
+$dnsServers = @("10.0.0.1", "10.0.0.2") # Domain controllers
+
+# Workgroup with corporate DNS
+$dnsServers = @("192.168.1.10", "192.168.1.11") # Corporate DNS servers
+
+# Testing only (not recommended for production)
+$dnsServers = @("8.8.8.8", "8.8.4.4") # Google Public DNS
+```
+
+### Common DNS Configuration Issues
+
+#### Issue: DNS servers configured but not working
+
+**Solution:**
+```powershell
+# Flush DNS cache
+Clear-DnsClientCache
+
+# Reset DNS client
+Restart-Service Dnscache
+
+# Test again
+Resolve-DnsName microsoft.com
+```
+
+#### Issue: DNS servers not accessible from infrastructure IPs
+
+**Solution:**
+- Ensure DNS servers are on a routable network from infrastructure IP subnet
+- Check firewall rules between infrastructure IP pool and DNS servers
+- Verify routing tables allow traffic to DNS servers
+- Test connectivity using `Test-NetConnection -ComputerName <DNS server IP> -Port 53`
+
+### DNS and Infrastructure IP Pool
+
+The infrastructure IP pool must be able to reach DNS servers:
+
+```
+Infrastructure IP Pool (e.g., 10.0.0.100-10.0.0.150)
+ ↓
+ Default Gateway
+ ↓
+ DNS Servers (e.g., 192.168.1.100)
+ ↓
+ Public Internet / Azure Endpoints
+```
+
+Ensure:
+- Infrastructure IPs can reach default gateway
+- Default gateway can route to DNS servers
+- DNS servers can resolve public names
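+
+A quick end-to-end spot check of this path from the management IP; the gateway and DNS server addresses below are placeholders:
+
+```powershell
+# Gateway reachable?
+Test-Connection -ComputerName 10.0.0.1 -Count 2 -Quiet
+
+# DNS server reachable on TCP port 53?
+Test-NetConnection -ComputerName 192.168.1.100 -Port 53 -InformationLevel Quiet
+
+# Public name resolvable through that DNS server?
+Resolve-DnsName management.azure.com -Server 192.168.1.100
+```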
+
+### Prerequisites for This Validator
+
+This validator requires:
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_Hyper_V_Readiness** - Hyper-V installed
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_VMSwitch_Readiness** - Virtual switch exists
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_MANAGEMENT_VNIC_Readiness** - Management vNIC exists
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_vNIC_Readiness** - Test vNIC can be created
+
+### Related Validators
+
+Validators that run after this validator:
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_IPReadiness** - Tests infrastructure IP assignment
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNS_Server_Port_53** - Tests DNS port 53 connectivity
+
+### Related Documentation
+
+- [Azure Local host network requirements](https://learn.microsoft.com/en-us/azure/azure-local/concepts/host-network-requirements)
+- [Firewall requirements for Azure Local](https://learn.microsoft.com/en-us/azure/azure-local/concepts/firewall-requirements)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-DNS-Port-53.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-DNS-Port-53.md
new file mode 100644
index 0000000..f2e1dc7
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-DNS-Port-53.md
@@ -0,0 +1,375 @@
+# AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNS_Server_Port_53
+
+| Name | AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNS_Server_Port_53 |
+|------|------|
+| Severity | Critical (or Warning if proxy is enabled): This validator will block operations until remediated, or provide a warning if proxy is configured. |
+| Applicable Scenarios | Deployment (without ArcGateway), Upgrade (without ArcGateway) |
+
+## Overview
+
+This validator tests TCP connectivity from infrastructure pool IPs to DNS servers on port 53. This ensures that services running on infrastructure IPs will be able to perform DNS name resolution for accessing Azure services and other resources.
+
+## Requirements
+
+1. Infrastructure IPs must be able to reach DNS servers on TCP port 53
+2. DNS servers must be configured and accessible
+3. Network path must allow DNS traffic from infrastructure IPs
+4. Firewall rules must permit DNS queries (TCP port 53)
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field.
+
+```json
+{
+ "Name": "AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNS_Server_Port_53",
+ "DisplayName": "Test DNS server port connection for all IP in infra IP pool",
+ "Title": "Test DNS server port connection for all IP in infra IP pool",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Test DNS server port connection for all IP in infra IP pool",
+ "Remediation": "Make sure infra IP 10.0.0.100 could connect to your DNS server correctly.",
+ "TargetResourceID": "Infra_IP_Connection_DNS_Connection_10.0.0.100",
+ "TargetResourceName": "Infra_IP_Connection_DNS_Connection_10.0.0.100",
+ "TargetResourceType": "Infra_IP_Connection_DNS_Connection_10.0.0.100",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "SERVER01",
+ "Resource": "10.0.0.100-Ethernet",
+ "Detail": "[FAILED] Connection from 10.0.0.100 (via physical adapter Ethernet) to DNS server port 53 failed after 3 attempts. DNS server used: 192.168.1.100 192.168.1.101",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Cannot Connect to DNS Server Port 53
+
+**Error Message:**
+```text
+[FAILED] Connection from 10.0.0.100 (via physical adapter Ethernet) to DNS server port 53 failed after 3 attempts. DNS server used: 192.168.1.100 192.168.1.101
+```
+
+**Root Cause:** Infrastructure IP cannot establish TCP connection to DNS servers on port 53. Possible causes:
+- DNS servers are not reachable from infrastructure IP subnet
+- Firewall blocking DNS traffic
+- DNS servers not listening on TCP port 53
+- Network routing issues
+- DNS servers offline or misconfigured
+
+#### Remediation Steps
+
+##### 1. Verify DNS Server Configuration
+
+Check which DNS servers are being tested:
+
+```powershell
+# Get DNS servers configured on management adapter
+$mgmtAdapter = Get-NetAdapter -Name "MyAdapterName" # Replace with the actual name in the system
+$dnsServers = (Get-DnsClientServerAddress -InterfaceAlias $mgmtAdapter.Name -AddressFamily IPv4).ServerAddresses
+
+Write-Host "DNS Servers configured:" -ForegroundColor Cyan
+$dnsServers | ForEach-Object { Write-Host " $_" -ForegroundColor White }
+```
+
+##### 2. Test DNS Server Connectivity from Management IP
+
+First verify DNS works from management IP:
+
+```powershell
+# Test from management IP
+$mgmtIP = (Get-NetIPAddress -InterfaceAlias $mgmtAdapter.Name -AddressFamily IPv4 |
+ Where-Object { $_.IPAddress -notlike "169.254.*" }).IPAddress
+
+Write-Host "Testing from management IP: $mgmtIP" -ForegroundColor Cyan
+
+foreach ($dnsServer in $dnsServers) {
+ Write-Host "`nTesting DNS server: $dnsServer" -ForegroundColor Yellow
+
+ # Ping test
+ $ping = Test-Connection -ComputerName $dnsServer -Count 2 -Quiet
+ Write-Host " Ping: $(if ($ping) { '✓ Success' } else { '✗ Failed' })" -ForegroundColor $(if ($ping) { 'Green' } else { 'Red' })
+
+ # TCP Port 53 test
+ $tcpTest = Test-NetConnection -ComputerName $dnsServer -Port 53 -InformationLevel Quiet
+ Write-Host " Port 53 (TCP): $(if ($tcpTest) { '✓ Open' } else { '✗ Closed/Filtered' })" -ForegroundColor $(if ($tcpTest) { 'Green' } else { 'Red' })
+
+ # UDP Port 53 test (typical DNS)
+ # Note: Test-NetConnection doesn't support UDP well, but DNS primarily uses UDP
+ try {
+ $dnsResolve = Resolve-DnsName microsoft.com -Server $dnsServer -ErrorAction Stop
+ Write-Host " DNS Resolution: ✓ Working" -ForegroundColor Green
+ } catch {
+ Write-Host " DNS Resolution: ✗ Failed" -ForegroundColor Red
+ }
+}
+```
+
+##### 3. Check Routing from Infrastructure IP Subnet
+
+Verify routing configuration:
+
+```powershell
+# Check routing table
+Get-NetRoute | Where-Object { $_.DestinationPrefix -eq "0.0.0.0/0" } |
+ Format-Table DestinationPrefix, NextHop, InterfaceAlias, RouteMetric -AutoSize
+
+# For infrastructure IP subnet, verify gateway can reach DNS servers
+$infraGateway = "10.0.0.1" # Your infrastructure IP gateway
+$dnsServer = "192.168.1.100" # Your DNS server
+
+Write-Host "`nChecking if gateway $infraGateway can reach DNS server $dnsServer" -ForegroundColor Cyan
+# This assumes you can access the gateway - may need to check on gateway device
+```
+
+##### 4. Check Firewall Rules
+
+Verify DNS traffic is allowed:
+
+```powershell
+# Check Windows Firewall rules for DNS
+Get-NetFirewallRule -DisplayName "*DNS*" | Where-Object { $_.Enabled -eq $true } |
+ Select-Object DisplayName, Direction, Action, Enabled |
+ Format-Table -AutoSize
+
+# Check if DNS Client service is running
+Get-Service Dnscache | Select-Object Name, Status, StartType
+
+# If name-resolution rules were disabled, re-enabling the Network Discovery
+# group restores the built-in rules (for example, LLMNR)
+Enable-NetFirewallRule -DisplayGroup "Network Discovery"
+```
+
+##### 5. Verify DNS Servers Are Operational
+
+Check DNS server status:
+
+```powershell
+foreach ($dnsServer in $dnsServers) {
+ Write-Host "`nDNS Server: $dnsServer" -ForegroundColor Cyan
+
+ # Test basic connectivity
+ $reachable = Test-Connection -ComputerName $dnsServer -Count 1 -Quiet
+ if (-not $reachable) {
+ Write-Host " ✗ DNS server is not reachable" -ForegroundColor Red
+ continue
+ }
+
+ # Test DNS service is listening
+ $portOpen = Test-NetConnection -ComputerName $dnsServer -Port 53 -WarningAction SilentlyContinue
+ if ($portOpen.TcpTestSucceeded) {
+ Write-Host " ✓ DNS service is listening on port 53" -ForegroundColor Green
+ } else {
+ Write-Host " ✗ DNS service is NOT listening on port 53" -ForegroundColor Red
+ Write-Host " Check DNS server configuration and service status" -ForegroundColor Yellow
+ }
+
+ # Test actual DNS resolution
+ try {
+ $result = Resolve-DnsName -Name "microsoft.com" -Server $dnsServer -ErrorAction Stop
+ Write-Host " ✓ DNS resolution is working" -ForegroundColor Green
+ } catch {
+ Write-Host " ✗ DNS resolution failed: $($_.Exception.Message)" -ForegroundColor Red
+ }
+}
+```
+
+##### 6. Check Network Path Between Infrastructure IP and DNS
+
+Verify network connectivity path:
+
+```powershell
+# Trace route from management IP to DNS server (as a proxy for infrastructure IP path)
+$dnsServer = $dnsServers[0]
+Write-Host "Tracing route to DNS server $dnsServer..." -ForegroundColor Cyan
+Test-NetConnection -ComputerName $dnsServer -TraceRoute |
+ Select-Object -ExpandProperty TraceRoute
+```
+
+**Network path requirements:**
+- Infrastructure IP subnet → Default Gateway → DNS Server subnet
+- All routers/firewalls in path must allow DNS traffic (port 53 TCP/UDP)
+- VLANs must be properly configured and routed
+
+##### 7. Test with Manual Connection from Infrastructure IP
+
+If possible, manually test from an infrastructure IP:
+
+```powershell
+# This requires temporarily assigning an infrastructure IP to test with
+$testIP = "10.0.0.100" # Infrastructure IP
+$dnsServer = "192.168.1.100" # Your DNS server
+
+# Use Test-NetConnection with specific source if possible
+# Note: Windows doesn't directly support source IP specification in Test-NetConnection
+# The validator uses curl.exe internally with --interface parameter
+
+# Alternative: Use PowerShell TCP client
+$tcpClient = New-Object System.Net.Sockets.TcpClient
+try {
+ $tcpClient.Connect($dnsServer, 53)
+ if ($tcpClient.Connected) {
+        Write-Host "✓ Successfully connected to ${dnsServer}:53" -ForegroundColor Green
+ $tcpClient.Close()
+ }
+} catch {
+    Write-Host "✗ Failed to connect to ${dnsServer}:53" -ForegroundColor Red
+ Write-Host " Error: $($_.Exception.Message)" -ForegroundColor Yellow
+}
+```
+
+##### 8. If Using Proxy
+
+If proxy is enabled, DNS connectivity failure is downgraded to WARNING severity:
+
+**Why:** When proxy is configured, DNS resolution may happen on the proxy server rather than from the infrastructure IP directly.
+
+**Considerations:**
+- Proxy must be configured to handle DNS requests
+- Proxy must be reachable from infrastructure IPs
+- Some Azure services may require direct DNS connectivity even with proxy
+
+**Check proxy configuration:**
+```powershell
+# Check proxy settings
+Get-ItemProperty -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings" |
+ Select-Object ProxyEnable, ProxyServer
+
+# Check WinHTTP proxy
+netsh winhttp show proxy
+```
+
+##### 9. Retry the Validation
+
+After fixing DNS connectivity issues, re-run the Environment Validator.
+
+---
+
+## Additional Information
+
+### Why DNS Port 53 Connectivity Is Important
+
+Azure Local services running on infrastructure IPs need DNS for:
+
+1. **Azure service endpoint resolution** - Resolving names like `management.azure.com`, `login.microsoftonline.com`
+2. **Certificate validation** - Accessing CRL/OCSP endpoints for certificate validation
+3. **Service discovery** - Finding other cluster services and resources
+4. **Monitoring and telemetry** - Sending data to Azure monitoring endpoints
+
+### TCP vs UDP for DNS
+
+DNS typically uses:
+- **UDP port 53** - Primary protocol for DNS queries (fast, lightweight)
+- **TCP port 53** - Used for large responses, zone transfers, and some security features
+
+The validator tests **TCP port 53** because:
+- More reliable for connectivity testing
+- Required for DNSSEC and large responses
+- Indicates DNS server is fully operational
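+
+To exercise the TCP path specifically (rather than the usual UDP path), nslookup can be forced to use TCP with its `-vc` (virtual circuit) switch; the DNS server IP is a placeholder:
+
+```powershell
+# Force a DNS query over TCP port 53
+nslookup -vc microsoft.com 192.168.1.100
+```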
+
+### Retry Logic
+
+The validator attempts DNS connectivity **10 times** (configurable via retryTimes parameter) before reporting failure:
+- Each attempt tests TCP connection to port 53
+- Short delays between retries
+- Tests against all configured DNS servers
+- Stops on first successful connection
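+
+For manual testing, a rough PowerShell equivalent of this retry pattern (the DNS server IP and delay are placeholders; adjust the attempt count to match your configuration):
+
+```powershell
+$dnsServer = "192.168.1.100"   # placeholder DNS server
+$attempts = 0
+do {
+    $attempts++
+    $connected = Test-NetConnection -ComputerName $dnsServer -Port 53 -InformationLevel Quiet
+    if (-not $connected) { Start-Sleep -Seconds 2 }   # short delay between retries
+} until ($connected -or $attempts -ge 10)
+Write-Host "Port 53 reachable: $connected (after $attempts attempt(s))"
+```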
+
+### Severity Levels
+
+| Configuration | Severity | Reason |
+|--------------|----------|--------|
+| **No proxy** | CRITICAL | DNS connectivity is essential |
+| **Proxy enabled** | WARNING | DNS may be handled by proxy |
+
+### Infrastructure IP → DNS Path
+
+```
+Infrastructure IP (10.0.0.100)
+ ↓
+ vNIC on Virtual Switch
+ ↓
+ Physical Adapter (Ethernet)
+ ↓
+ Network Switch/Router
+ ↓
+ Default Gateway (10.0.0.1)
+ ↓
+ Network Infrastructure
+ ↓
+ DNS Server (192.168.1.100) Port 53
+```
+
+Each hop must allow and route DNS traffic correctly.
+
+### Common DNS Connectivity Issues
+
+| Issue | Symptoms | Solution |
+|-------|----------|----------|
+| **Firewall blocking** | Timeout on port 53 | Allow DNS in firewall rules |
+| **DNS server offline** | No response | Check DNS server status |
+| **Routing issue** | Cannot reach DNS subnet | Fix routing tables/gateway config |
+| **Wrong DNS IPs** | Connection refused | Verify DNS server IPs are correct |
+| **VLAN misconfiguration** | Intermittent failures | Check VLAN settings on switches |
+| **DNS service not running** | Port closed | Start DNS service on DNS server |
+
+### Alternative DNS Testing Tools
+
+For advanced troubleshooting:
+
+```powershell
+# nslookup with specific DNS server
+nslookup microsoft.com 192.168.1.100
+
+# Resolve-DnsName with specific server
+Resolve-DnsName -Name microsoft.com -Server 192.168.1.100 -Type A
+
+# Test DNS port with Test-NetConnection
+Test-NetConnection -ComputerName 192.168.1.100 -Port 53
+```
+
+### DNS Server Requirements
+
+For Azure Local deployments:
+
+| DNS Server Type | Requirements |
+|----------------|--------------|
+| **Domain Controllers** | Must be reachable from infrastructure IPs |
+| **Corporate DNS** | Must resolve external/public names |
+| **Public DNS** | Use only for testing (not recommended for production) |
+| **Forwarders** | Must be configured if internal DNS does not resolve public names |
+
+
+**Best practice**: Use at least 2 DNS servers for redundancy.
+
+### Prerequisites for This Validator
+
+Requires these validators to pass first:
+- Hyper-V Readiness
+- VMSwitch Readiness
+- Management vNIC Readiness
+- Test vNIC Readiness
+- DNS Client Readiness
+- **IP Readiness** - Infrastructure IP must be configured and reach gateway
+
+### Related Validators
+
+After this validator passes, the endpoint connectivity validators run, testing each required Azure endpoint from the infrastructure IPs.
+
+### Related Documentation
+
+- [DNS requirements](https://learn.microsoft.com/windows-server/networking/dns/dns-top)
+- [Azure Local firewall requirements](https://learn.microsoft.com/en-us/azure/azure-local/concepts/firewall-requirements)
+- [Network requirements](https://learn.microsoft.com/en-us/azure/azure-local/concepts/host-network-requirements)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-Hyper-V-Readiness.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-Hyper-V-Readiness.md
new file mode 100644
index 0000000..08c0b96
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-Hyper-V-Readiness.md
@@ -0,0 +1,291 @@
+# AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_Hyper_V_Readiness
+
+| Name | AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_Hyper_V_Readiness |
+|------|------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Deployment (without ArcGateway), Upgrade (without ArcGateway) |
+
+## Overview
+
+This validator checks that the Hyper-V role is installed and available on the host. Hyper-V is required to test infrastructure IP pool connectivity because the validator needs to create a temporary virtual switch and virtual network adapter to validate that infrastructure IPs can reach DNS servers and required endpoints.
+
+## Requirements
+
+1. Hyper-V role must be installed on the host
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about Hyper-V readiness.
+
+```json
+{
+ "Name": "AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_Hyper_V_Readiness",
+ "DisplayName": "Test Hyper-V readiness for all IP in infra IP pool",
+ "Title": "Test Hyper-V readiness for all IP in infra IP pool",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Test Hyper-V readiness for all IP in infra IP pool",
+ "Remediation": "Make sure that Hyper-V is installed on host SERVER01 and rerun the validation.",
+ "TargetResourceID": "Infra_IP_Connection_HyperVReadiness",
+ "TargetResourceName": "Infra_IP_Connection_HyperVReadiness",
+ "TargetResourceType": "Infra_IP_Connection_HyperVReadiness",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "SERVER01",
+ "Resource": "HyperVReadiness",
+ "Detail": "[FAILED] Cannot test connection for infra IP without Hyper-V on host SERVER01.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Hyper-V Not Installed
+
+**Error Message:**
+```text
+[FAILED] Cannot test connection for infra IP without Hyper-V on host SERVER01.
+```
+
+**Root Cause:** The Hyper-V role is not installed on the host. The infrastructure IP connectivity validator requires Hyper-V to create a temporary virtual switch and virtual adapter for testing network connectivity from infrastructure IPs.
+
+#### Remediation Steps
+
+##### 1. Check Current Hyper-V Status
+
+Check if Hyper-V is installed:
+
+```powershell
+# Check Hyper-V Windows feature status
+Get-WindowsFeature -Name "Hyper-V"
+
+# Check if Hyper-V cmdlets are available
+Get-Command Get-VMSwitch -ErrorAction SilentlyContinue
+```
+
+**Expected output if installed:**
+```
+Display Name Name Install State
+----------------- ---- -------------
+[X] Hyper-V Hyper-V Installed
+```
+
+##### 2. Install Hyper-V Role
+
+If Hyper-V is not installed, install it:
+
+```powershell
+# Install Hyper-V role
+Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
+
+# Note: A reboot is required after installation
+```
+
+**Alternative method using DISM:**
+```powershell
+# Enable Hyper-V using DISM
+Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -NoRestart
+
+# Install Hyper-V management tools
+Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-PowerShell -All -NoRestart
+
+# Reboot the system
+Restart-Computer -Force
+```
+
+##### 3. Verify Installation
+
+After the reboot, verify Hyper-V is properly installed:
+
+```powershell
+# Verify Hyper-V feature is installed
+Get-WindowsFeature -Name "Hyper-V" | Select-Object DisplayName, InstallState
+
+# Verify Hyper-V services are running
+Get-Service -Name vmms | Select-Object Name, Status, StartType
+
+# Verify Hyper-V cmdlets are available
+Get-Command Get-VMSwitch, New-VMSwitch, Get-VMNetworkAdapter
+```
+
+**Expected services status:**
+```
+Name Status StartType
+---- ------ ---------
+vmms Running Automatic
+```
+
+##### 4. Check for Installation Issues
+
+If Hyper-V installation fails or cannot be enabled:
+
+**Check hardware virtualization support:**
+```powershell
+# Check if virtualization is enabled in BIOS/UEFI
+systeminfo | findstr /i "hyper-v"
+
+# Check processor virtualization capabilities
+Get-CimInstance -ClassName Win32_Processor | Select-Object Name, VirtualizationFirmwareEnabled, SecondLevelAddressTranslationExtensions
+```
+
+**Common issues:**
+- **Virtualization not enabled in BIOS**: Enable Intel VT-x or AMD-V in BIOS/UEFI settings
+- **Conflicting hypervisor**: Remove other virtualization products (VMware Workstation, VirtualBox, etc.)
+- **Running in a VM**: Nested virtualization must be enabled on the host hypervisor
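+
+For the nested-virtualization case, a minimal sketch to run on the outer physical host ("AzLocalNode1" is a placeholder VM name; the VM must be powered off):
+
+```powershell
+# Expose virtualization extensions to the VM so Hyper-V can install inside it
+Set-VMProcessor -VMName "AzLocalNode1" -ExposeVirtualizationExtensions $true
+```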
+
+##### 5. Retry the Validation
+
+After installing Hyper-V and rebooting, re-run the Environment Validator.
+
+---
+
+## Additional Information
+
+### Why Hyper-V is Required for This Validator
+
+The infrastructure IP connectivity validator performs the following operations that require Hyper-V:
+
+1. **Creates a temporary virtual switch** (or uses an existing one)
+2. **Creates a virtual network adapter** (vNIC) for testing
+3. **Assigns infrastructure IPs** to the virtual adapter one at a time
+4. **Tests connectivity** from each IP to DNS servers and Azure endpoints
+5. **Cleans up** the test resources after validation
+
+This approach allows the validator to test connectivity from infrastructure IPs without permanently configuring them on physical adapters.
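+
+For reference, a manual approximation of these steps (placeholder names and IPs; run only in a maintenance window, since it changes host networking):
+
+```powershell
+$switch = Get-VMSwitch | Select-Object -First 1   # assumes a virtual switch already exists
+
+# Create a temporary test vNIC on the management OS
+Add-VMNetworkAdapter -ManagementOS -Name "InfraIPTest" -SwitchName $switch.Name
+
+# Assign one infrastructure IP to it
+New-NetIPAddress -InterfaceAlias "vEthernet (InfraIPTest)" -IPAddress 10.0.0.100 -PrefixLength 24
+
+# Ping the gateway, sourcing from the test IP
+ping.exe -n 2 -S 10.0.0.100 10.0.0.1
+
+# Clean up the test vNIC
+Remove-VMNetworkAdapter -ManagementOS -Name "InfraIPTest"
+```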
+
+### When This Validator Runs
+
+This validator only runs in scenarios where infrastructure IP connectivity needs to be tested:
+
+| Scenario | Runs? | Conditions |
+|----------|-------|------------|
+| **Deployment** | ✓ Yes | Only if ArcGateway is NOT enabled |
+| **Upgrade** | ✓ Yes | Only if ArcGateway is NOT enabled |
+
+**Note:** If ArcGateway is enabled, this validator is skipped because ArcGateway provides an alternative connectivity method that might not require infrastructure IP validation.
+
+### Hyper-V Requirements for Azure Local
+
+Hyper-V is a core requirement for Azure Local clusters:
+
+- **Required for cluster operations**: Hosts VMs and containerized workloads
+- **Required for Network ATC**: Creates virtual switches for network isolation
+- **Required for storage**: Storage Spaces Direct uses Hyper-V features
+- **Required for management**: Admin VMs and Arc Resource Bridge run on Hyper-V
+
+### Verifying Hyper-V Installation
+
+Complete verification of Hyper-V installation:
+
+```powershell
+# Check all Hyper-V related features
+Get-WindowsFeature -Name Hyper-V* | Where-Object { $_.InstallState -eq "Installed" } |
+ Select-Object DisplayName, Name, InstallState |
+ Format-Table -AutoSize
+
+# Check Hyper-V virtual switch capabilities
+Get-VMHost | Select-Object VirtualHardDiskPath, VirtualMachinePath, EnableEnhancedSessionMode
+
+# Verify network virtualization capabilities
+Get-VMSystemSwitchExtension | Select-Object Name, Vendor, Enabled
+```
+
+### Troubleshooting Hyper-V Installation Issues
+
+#### Issue: Installation Fails with Error
+
+**Solution 1 - Check for conflicting software:**
+```powershell
+# For example, check for other hypervisors
+Get-WmiObject Win32_Product | Where-Object { $_.Name -like "*VMware*" -or $_.Name -like "*VirtualBox*" }
+
+# Uninstall conflicting software before installing Hyper-V
+```
+
+**Solution 2 - Verify system requirements:**
+```powershell
+# Check if system meets minimum requirements
+# - 64-bit processor with SLAT (Second Level Address Translation)
+# - VM Monitor Mode Extension (VT-x on Intel or AMD-V on AMD)
+# - Minimum 4 GB RAM (8+ GB recommended)
+# - BIOS-level hardware virtualization support enabled
+
+Get-ComputerInfo | Select-Object CsProcessors, OsTotalVisibleMemorySize, HyperVisorPresent, HyperVRequirementVirtualizationFirmwareEnabled
+```
+
+#### Issue: Hyper-V Cmdlets Not Available
+
+**Solution:**
+```powershell
+# Install Hyper-V PowerShell module separately
+Install-WindowsFeature -Name Hyper-V-PowerShell
+
+# Import module manually
+Import-Module Hyper-V
+
+# Verify module is loaded
+Get-Module Hyper-V
+```
+
+#### Issue: Hyper-V Services Not Starting
+
+**Solution:**
+```powershell
+# Check service dependencies
+Get-Service -Name vmms -DependentServices
+Get-Service -Name vmms | Select-Object -ExpandProperty ServicesDependedOn
+
+# Start services manually
+Start-Service -Name vmms
+Start-Service -Name vmcompute
+
+# Check Windows Event Logs for errors
+Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-*" -MaxEvents 20 |
+ Where-Object { $_.LevelDisplayName -eq "Error" } |
+ Select-Object TimeCreated, Message |
+ Format-List
+```
+
+### Installing Hyper-V on Server Core
+
+If running Server Core:
+
+```powershell
+# Install Hyper-V on Server Core
+Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
+
+# Verify installation
+Get-WindowsFeature -Name Hyper-V*
+
+# If management tools are needed
+Install-WindowsFeature -Name RSAT-Hyper-V-Tools
+```
+
+### Related Validators
+
+Other infrastructure IP connection validators that run after this validator passes:
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_VMSwitch_Readiness** - Validates virtual switch
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_MANAGEMENT_VNIC_Readiness** - Validates management vNIC
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_IPReadiness** - Tests IP configuration
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNS_Server_Port_53** - Tests DNS connectivity
+
+### Related Documentation
+
+- [Install Hyper-V on Windows Server](https://learn.microsoft.com/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server)
+- [Hyper-V on Windows Server](https://learn.microsoft.com/windows-server/virtualization/hyper-v/hyper-v-on-windows-server)
+- [System requirements for Hyper-V](https://learn.microsoft.com/windows-server/virtualization/hyper-v/system-requirements-for-hyper-v-on-windows)
+- [Azure Local host network requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-IPReadiness.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-IPReadiness.md
new file mode 100644
index 0000000..fc80ab6
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-IPReadiness.md
@@ -0,0 +1,291 @@
+# AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_IPReadiness
+
+| Name | AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_IPReadiness |
+|------|--------------------------------------------------------------------------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Deployment (without ArcGateway), Upgrade (without ArcGateway) |
+
+## Overview
+
+This validator tests that infrastructure pool IPs can be assigned to a network adapter and can reach the default gateway. For each IP tested (up to first 9 from the pool), the validator assigns it to a temporary vNIC and validates ICMP connectivity to the gateway.
+
+## Requirements
+
+1. Infrastructure IPs must be available and not in use by other devices
+2. IPs must be routable to the default gateway
+3. Default gateway must be accessible via ICMP (ping)
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field.
+
+```json
+{
+ "Name": "AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_IPReadiness",
+ "DisplayName": "Test IP readiness on test adapter for IP from infra pool",
+ "Title": "Test IP readiness on test adapter for IP from infra pool",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Test IP readiness on test adapter for IP from infra pool",
+ "Remediation": "Make sure infra IP 10.0.0.100 is routable to your gateway 10.0.0.1, and the IP is not used by any other device or service on the network.",
+ "TargetResourceID": "Infra_IP_Connection_InfraIP_Readiness_10.0.0.100",
+ "TargetResourceName": "Infra_IP_Connection_InfraIP_Readiness_10.0.0.100",
+ "TargetResourceType": "Infra_IP_Connection_InfraIP_Readiness_10.0.0.100",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "SERVER01",
+ "Resource": "10.0.0.100",
+ "Detail": "[FAILED] Connection from 10.0.0.100 to gateway 10.0.0.1 failed. Cannot get the IP configured correctly on the test adapter.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Infrastructure IP Not Ready or Cannot Reach Gateway
+
+**Error Message:**
+```text
+[FAILED] Connection from 10.0.0.100 to gateway 10.0.0.1 failed. Cannot get the IP configured correctly on the test adapter.
+```
+
+**Root Cause:** The infrastructure IP cannot be configured on the test adapter, or ICMP connectivity to the gateway fails. Possible causes:
+- IP is already in use by another device
+- IP is not routable to the gateway
+- Gateway is not responding to ICMP
+- Network configuration issues (VLAN, routing, firewall)
+- IP assignment timeout
+
+#### Remediation Steps
+
+##### 1. Verify IP is Not in Use
+
+Check if the infrastructure IP is already in use:
+
+```powershell
+# Test if IP is in use
+$ipToCheck = "10.0.0.100" # Replace with the failing IP
+
+# Ping test
+$pingResult = Test-Connection -ComputerName $ipToCheck -Count 4 -Quiet
+if ($pingResult) {
+ Write-Host "⚠ IP $ipToCheck is responding to ping - may be in use" -ForegroundColor Yellow
+} else {
+ Write-Host "✓ IP $ipToCheck is not responding to ping" -ForegroundColor Green
+}
+
+# ARP check
+arp -a | Select-String $ipToCheck
+```
+
+If IP is in use:
+- **Option 1**: Remove the device using that IP
+- **Option 2**: Choose a different infrastructure IP pool range that doesn't conflict
+
+##### 2. Verify Gateway Connectivity
+
+Test connectivity to the default gateway:
+
+```powershell
+# Get gateway from infrastructure IP pool configuration
+$gateway = "10.0.0.1" # Replace with your actual gateway
+
+# Test ping to gateway
+Test-Connection -ComputerName $gateway -Count 4
+
+# Check routing table
+Get-NetRoute -DestinationPrefix "0.0.0.0/0" | Format-Table DestinationPrefix, NextHop, InterfaceAlias -AutoSize
+```
+
+If gateway is not reachable:
+- Verify gateway IP is correct
+- Check physical network connectivity
+- Verify VLAN configuration (if using VLANs)
+- Check switch/router configuration
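+
+If the gateway does not answer, tracing the route can show where traffic stops. A small diagnostic sketch; the gateway address is a placeholder:
+
+```powershell
+# Trace the path toward the gateway to see where packets stop
+$gateway = "10.0.0.1" # Replace with your actual gateway
+Test-NetConnection -ComputerName $gateway -TraceRoute
+
+# Show which local interface and route Windows selects for the gateway
+Find-NetRoute -RemoteIPAddress $gateway | Format-List InterfaceAlias, IPAddress, NextHop
+```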
+
+##### 3. Check VLAN Configuration
+
+If using VLANs, verify configuration:
+
+```powershell
+# Check if management adapters have VLAN configuration
+Get-NetAdapterAdvancedProperty -Name "MyAdapter" -RegistryKeyword "VlanID" -ErrorAction SilentlyContinue |
+ Format-Table Name, DisplayValue -AutoSize
+
+# For infrastructure IP testing, VLAN should match management VLAN
+# Check your deployment configuration for the correct VLAN ID
+```
+
+##### 4. Verify Physical Network Infrastructure
+
+Check switch and network infrastructure:
+- **Spanning Tree**: Ensure STP is not blocking ports; classic STP can block a newly active port for ~30 seconds while it converges, so enable PortFast (or your vendor's edge-port equivalent) on host-facing ports
+- **Port Security**: Verify switch ports allow MAC address changes
+- **VLAN assignment**: Confirm correct VLAN is configured on switch ports
+- **Gateway/Router**: Verify router/gateway is operational and configured correctly
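+
+To spot intermittent drops such as STP convergence delays, a timed ping loop against the gateway can help. A minimal sketch; the gateway address is a placeholder:
+
+```powershell
+# Ping the gateway once per second for 60 seconds and log any gaps
+$gateway = "10.0.0.1" # Replace with your actual gateway
+1..60 | ForEach-Object {
+    if (-not (Test-Connection -ComputerName $gateway -Count 1 -Quiet)) {
+        Write-Host "$(Get-Date -Format 'HH:mm:ss') - no reply from $gateway" -ForegroundColor Yellow
+    }
+    Start-Sleep -Seconds 1
+}
+```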
+
+##### 5. Test with Manual IP Assignment
+
+Manually test IP assignment on management adapter:
+
+```powershell
+# Find the management adapter (replace the name with your first management intent adapter)
+$mgmtAdapter = Get-NetAdapter | Where-Object { $_.Name -eq "Ethernet" } | Select-Object -First 1
+
+if ($mgmtAdapter) {
+ Write-Host "Testing manual IP assignment on: $($mgmtAdapter.Name)" -ForegroundColor Cyan
+
+ # Save current IP configuration
+ $currentIP = Get-NetIPConfiguration -InterfaceAlias $mgmtAdapter.Name
+
+ # Test assigning an infrastructure IP
+ $testIP = "10.0.0.100" # Replace with failing IP
+ $gateway = "10.0.0.1" # Replace with your gateway
+ $prefixLength = 24
+
+ try {
+ # Temporarily assign the IP
+ New-NetIPAddress -InterfaceAlias $mgmtAdapter.Name `
+ -IPAddress $testIP `
+ -PrefixLength $prefixLength `
+ -SkipAsSource $true `
+ -ErrorAction Stop
+
+ Write-Host "✓ Successfully assigned IP" -ForegroundColor Green
+
+ # Wait for IP to stabilize
+ Start-Sleep -Seconds 5
+
+ # Test ping to gateway
+ ping $gateway -S $testIP
+
+ # Validate the ping result here
+
+ # Clean up - remove test IP
+ Remove-NetIPAddress -InterfaceAlias $mgmtAdapter.Name -IPAddress $testIP -Confirm:$false
+
+ } catch {
+ Write-Host "✗ Failed to assign IP: $($_.Exception.Message)" -ForegroundColor Red
+ }
+}
+```
+
+##### 6. Check Firewall Rules
+
+Ensure ICMP is allowed:
+
+```powershell
+# Check ICMP firewall rules
+Get-NetFirewallRule -DisplayName "*ICMP*" | Where-Object { $_.Enabled -eq $true } |
+ Select-Object DisplayName, Direction, Action | Format-Table -AutoSize
+
+# Enable ICMP if needed
+Enable-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)"
+```
+
+##### 7. Retry the Validation
+
+After fixing network issues, re-run the Environment Validator.
+
+---
+
+## Additional Information
+
+### How IP Readiness Testing Works
+
+The validator performs these steps for each infrastructure IP (up to 9 IPs):
+
+1. **Assign IP** to temporary test vNIC
+2. **Wait for IP to become "Preferred" state** (up to 180 seconds default)
+3. **Ping default gateway** from the IP (15 retries)
+4. **If successful**: IP is ready, proceed with connectivity tests
+5. **If failed**: Report IP readiness failure, skip to next IP
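+
+The sketch below approximates this flow for a single IP. It is illustrative only, not the validator's actual code; the vNIC alias, IP, and gateway are placeholders:
+
+```powershell
+$alias   = "vEthernet (TestVNIC)" # Placeholder test vNIC alias
+$testIP  = "10.0.0.100"           # Placeholder IP from the infrastructure pool
+$gateway = "10.0.0.1"             # Placeholder gateway
+
+# 1. Assign the IP to the test adapter
+New-NetIPAddress -InterfaceAlias $alias -IPAddress $testIP -PrefixLength 24 -SkipAsSource $true | Out-Null
+
+# 2. Wait up to 180 seconds for the address to reach the Preferred state
+$deadline = (Get-Date).AddSeconds(180)
+do {
+    Start-Sleep -Seconds 2
+    $state = (Get-NetIPAddress -InterfaceAlias $alias -IPAddress $testIP -ErrorAction SilentlyContinue).AddressState
+} until ($state -eq "Preferred" -or (Get-Date) -gt $deadline)
+
+# 3. Ping the gateway with retries (the validator uses 15 attempts)
+$reachable = $false
+for ($i = 0; $i -lt 15 -and -not $reachable; $i++) {
+    $reachable = Test-Connection -ComputerName $gateway -Count 1 -Quiet
+    Start-Sleep -Seconds 1
+}
+Write-Host "IP state: $state, gateway reachable: $reachable"
+
+# 4. Clean up the test IP
+Remove-NetIPAddress -InterfaceAlias $alias -IPAddress $testIP -Confirm:$false
+```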
+
+### Why Only First 9 IPs Are Tested
+
+The validator tests only the first 9 IPs from the infrastructure pool because:
+- **6 IPs** are currently required for Azure Local services
+- **3 additional IPs** are reserved for future use (e.g., SLB VMs)
+- Testing all IPs would take too long (each IP test takes 10-30 seconds)
+
+### IP States and Readiness
+
+Windows IP addresses go through states:
+- **Tentative**: IP is being validated (duplicate address detection)
+- **Preferred**: IP is ready and usable ✓
+- **Deprecated**: IP is being phased out
+- **Invalid**: IP configuration failed
+
+The validator waits for "Preferred" state before testing connectivity.
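+
+You can inspect the current state of the addresses on a host with `Get-NetIPAddress`:
+
+```powershell
+# Show the duplicate-address-detection state of each IPv4 address
+Get-NetIPAddress -AddressFamily IPv4 |
+    Select-Object InterfaceAlias, IPAddress, AddressState |
+    Format-Table -AutoSize
+```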
+
+### Common Causes of IP Readiness Failures
+
+| Issue | Symptoms | Solution |
+|-------|----------|----------|
+| **IP in use** | Ping shows IP responds | Find and remove conflicting device |
+| **Wrong subnet/gateway** | No route to gateway | Fix IP pool configuration |
+| **VLAN mismatch** | Gateway unreachable | Configure correct VLAN on switch |
+| **STP convergence** | Intermittent connectivity | Wait for convergence (~30 s) or enable STP PortFast |
+| **Firewall blocking ICMP** | Ping timeout | Allow ICMP in firewall rules |
+| **Gateway offline** | No gateway response | Check router/gateway status |
+
+### Timeout Values
+
+The validator uses these timeout values:
+- **IP configuration timeout**: 180 seconds (configurable via TimeoutWaitForIPInSeconds parameter)
+- **Ping retries**: 15 attempts with 1 second between attempts
+- **Total time per IP**: Up to 3-4 minutes if experiencing issues
+
+### Infrastructure IP Pool Design
+
+**Best practices:**
+- **Use dedicated subnet**: Don't overlap with existing DHCP ranges or static IPs
+- **Reserve IPs**: Exclude infrastructure range from DHCP scope
+- **Size appropriately**: Minimum 9-16 IPs, recommended 32+ for growth
+- **Document**: Keep record of IP pool allocation
+
+**Example good configuration:**
+```
+Infrastructure Pool: 10.0.10.100 - 10.0.10.150 (51 IPs)
+Subnet: 10.0.10.0/24
+Gateway: 10.0.10.1
+DNS: 10.0.10.10, 10.0.10.11
+
+Ensure:
+- No DHCP assignments in this range
+- No static IPs assigned in this range
+- Gateway is 10.0.10.1 and operational
+```
+
+### Prerequisites for This Validator
+
+Requires these validators to pass first:
+- Hyper-V Readiness
+- VMSwitch Readiness
+- Management vNIC Readiness
+- Test vNIC Readiness
+- DNS Client Readiness
+
+### Related Validators
+
+After this validator passes:
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNS_Server_Port_53** - Tests DNS connectivity
+
+### Related Documentation
+
+- [Network requirements](https://learn.microsoft.com/en-us/azure/azure-local/concepts/host-network-requirements)
+- [Firewall requirements](https://learn.microsoft.com/en-us/azure/azure-local/concepts/firewall-requirements)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-VMSwitch-Readiness.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-VMSwitch-Readiness.md
new file mode 100644
index 0000000..dd43b48
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-VMSwitch-Readiness.md
@@ -0,0 +1,320 @@
+# AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_VMSwitch_Readiness
+
+| Name | AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_VMSwitch_Readiness |
+|------|---------------------------------------------------------------------------------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Deployment (without ArcGateway), Upgrade (without ArcGateway) |
+
+## Overview
+
+This validator checks that a suitable virtual switch exists or can be created for testing infrastructure IP pool connectivity. The validator needs a virtual switch with the same physical adapters as defined in the management intent to properly test network connectivity from infrastructure IPs.
+
+## Requirements
+
+1. Either an existing external virtual switch with matching management intent adapters, OR
+2. Ability to create a new external virtual switch using the management intent adapters
+3. The virtual switch must be properly configured and operational
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about virtual switch readiness.
+
+```json
+{
+ "Name": "AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_VMSwitch_Readiness",
+ "DisplayName": "Test VMSwitch readiness for all IP in infra IP pool",
+ "Title": "Test VMSwitch readiness for all IP in infra IP pool",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Test VMSwitch readiness for all IP in infra IP pool",
+ "Remediation": "Make sure at least one VMSwitch pre-configured on the host SERVER01 has the same set of adapters defined in management intent.",
+ "TargetResourceID": "Infra_IP_Connection_VMSwitchReadiness",
+ "TargetResourceName": "Infra_IP_Connection_VMSwitchReadiness",
+ "TargetResourceType": "Infra_IP_Connection_VMSwitchReadiness",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "SERVER01",
+ "Resource": "VMSwitchReadiness",
+ "Detail": "[FAILED] Cannot test connection for infra IP with wrong VMSwitch configured on host SERVER01.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: No Suitable VMSwitch Found
+
+**Error Message:**
+```text
+[FAILED] Cannot test connection for infra IP with wrong VMSwitch configured on host SERVER01.
+```
+
+**Root Cause:** Either:
+- No external virtual switch exists on the host
+- Existing virtual switches don't use the same physical adapters as defined in the management intent
+- The validator cannot create a suitable virtual switch
+
+#### Remediation Steps
+
+##### 1. Check Current Virtual Switch Configuration
+
+Verify existing virtual switches:
+
+```powershell
+# List all virtual switches
+Get-VMSwitch | Format-Table Name, SwitchType, NetAdapterInterfaceDescription, EmbeddedTeamingEnabled -AutoSize
+
+# Check which physical adapters are used by each switch
+Get-VMSwitch -SwitchType External | ForEach-Object {
+ $switch = $_
+ $team = Get-VMSwitchTeam -Name $switch.Name -ErrorAction SilentlyContinue
+ if ($team) {
+ Write-Host "`nSwitch: $($switch.Name)" -ForegroundColor Cyan
+ Write-Host " Team Members:" -ForegroundColor White
+ $team.NetAdapterInterfaceDescription | ForEach-Object { Write-Host " $_" }
+ } else {
+ Write-Host "`nSwitch: $($switch.Name)" -ForegroundColor Cyan
+ Write-Host " Single Adapter: $($switch.NetAdapterInterfaceDescription)" -ForegroundColor White
+ }
+}
+```
+
+##### 2. Check Management Intent Configuration
+
+Identify which adapters should be used according to your deployment configuration:
+
+```powershell
+# View physical adapters
+Get-NetAdapter -Physical | Select-Object Name, Status, LinkSpeed, InterfaceDescription | Format-Table -AutoSize
+
+# Your management intent should specify which adapters to use
+# Example: Management intent uses "Ethernet" and "Ethernet 2"
+```
+
+##### 3. Fix an Existing VMSwitch That Uses the Wrong Adapters
+
+If a VMSwitch already exists but uses the wrong adapters, you might need to modify or re-create it so that it uses the same adapters as defined in the management intent.
+
+> **Warning:** Modifying the virtual switch will temporarily disconnect network connectivity.
+
+```powershell
+# Remove existing switch (if safe to do so)
+$oldSwitch = Get-VMSwitch -Name "ExistingSwitch"
+Remove-VMSwitch -Name $oldSwitch.Name -Force
+
+# Create new switch with correct adapters
+$adapterNames = @("Ethernet", "Ethernet 2")
+New-VMSwitch -Name "ConvergedSwitch(ManagementIntent)" `
+ -NetAdapterName $adapterNames `
+ -EnableEmbeddedTeaming $true `
+ -AllowManagementOS $true
+```
+
+##### 4. Verify Virtual Switch is Operational
+
+After creating or modifying the virtual switch:
+
+```powershell
+# Check switch status
+Get-VMSwitch | Select-Object Name, SwitchType, NetAdapterInterfaceDescription | Format-List
+
+# Check that management OS has access
+Get-VMNetworkAdapter -ManagementOS | Select-Object Name, SwitchName, Status | Format-Table -AutoSize
+
+# Verify physical adapters are bound to the switch
+Get-VMSwitchTeam -Name "ConvergedSwitch(ManagementIntent)" | Select-Object Name, TeamingMode, LoadBalancingAlgorithm
+
+# Test basic connectivity
+Test-NetConnection -ComputerName 8.8.8.8 -InformationLevel Detailed
+```
+
+##### 5. Retry the Validation
+
+After creating or fixing the virtual switch, re-run the Environment Validator.
+
+---
+
+## Additional Information
+
+### Virtual Switch Requirements for Infrastructure IP Testing
+
+The validator requires a virtual switch that:
+
+1. **Uses management intent adapters**: Must use the same physical adapters specified in the deployment's management intent
+2. **Is External type**: Must be an external virtual switch (not Internal or Private)
+3. **Allows Management OS**: Must have `-AllowManagementOS $true` to create management vNICs
+4. **Is operational**: Must be in a healthy state
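+
+A quick way to confirm these properties on an existing switch; the switch name is a placeholder:
+
+```powershell
+$switch = Get-VMSwitch -Name "ConvergedSwitch(ManagementIntent)" # Replace with your switch name
+
+# External type and management OS access are both required
+$switch | Format-List Name, SwitchType, AllowManagementOS, EmbeddedTeamingEnabled
+```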
+
+### Virtual Switch Naming Convention
+
+The validator uses Network ATC naming standards:
+
+- **Switch name pattern**: `ConvergedSwitch(IntentName)` or similar
+- **Management vNIC pattern**: `vManagement(IntentName)`
+
+Example for intent named "ManagementIntent":
+- Switch: `ConvergedSwitch(ManagementIntent)`
+- vNIC: `vManagement(ManagementIntent)`
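+
+To see whether ATC-style names already exist on the host, a wildcard query works:
+
+```powershell
+# Look for ATC-named switches and management vNICs
+Get-VMSwitch -Name "ConvergedSwitch*" -ErrorAction SilentlyContinue
+Get-VMNetworkAdapter -ManagementOS -Name "vManagement*" -ErrorAction SilentlyContinue
+```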
+
+### Existing vs. New Virtual Switch
+
+The validator behaves differently based on existing virtual switch configuration:
+
+| Scenario | Validator Behavior |
+|----------|-------------------|
+| **No external switches exist** | Creates a new temporary switch for testing |
+| **1 external switch with correct adapters** | Uses the existing switch |
+| **1 external switch with wrong adapters** | **FAILS** - cannot proceed |
+| **Multiple external switches** | Searches for one with matching adapters |
+
+### Switch Embedded Teaming (SET)
+
+For multi-adapter management intents, the validator expects Switch Embedded Teaming:
+
+```powershell
+# SET switch combines multiple NICs into a team
+# Benefits:
+# - Load balancing across adapters
+# - Fault tolerance if one adapter fails
+# - RDMA support (if NICs support it)
+
+# Verify SET is enabled
+Get-VMSwitch | Where-Object { $_.EmbeddedTeamingEnabled -eq $true }
+
+# Check team configuration
+Get-VMSwitchTeam | Select-Object Name, TeamingMode, LoadBalancingAlgorithm, NetAdapterInterfaceDescription
+```
+
+### Common Virtual Switch Issues
+
+#### Issue: Switch creation fails
+
+**Solution 1 - Check adapter status:**
+```powershell
+# Ensure adapters are Up and not already bound
+Get-NetAdapter -Physical | Select-Object Name, Status, DriverDescription
+
+# Check if adapters are already bound to another switch
+Get-VMSwitch | Select-Object Name,
+    @{N='Adapters';E={ (Get-VMSwitchTeam -Name $_.Name -ErrorAction SilentlyContinue).NetAdapterInterfaceDescription -join ', ' }}
+```
+
+**Solution 2 - Remove existing bindings:**
+If an adapter is still bound to an old virtual switch, remove that switch (or detach the adapter from it) before creating the new one.
+
+#### Issue: Management OS loses connectivity after switch creation
+
+Make sure the first adapter in your management intent list has an active connection before you configure the VMSwitch.
+
+#### Issue: Wrong adapters in the switch
+
+**Solution - Rebuild switch with correct adapters:**
+```powershell
+# Document current management IP configuration
+$mgmtIP = Get-NetIPConfiguration | Where-Object { $_.IPv4DefaultGateway -ne $null } | Select-Object -First 1
+$ipAddress = $mgmtIP.IPv4Address.IPAddress
+$prefixLength = $mgmtIP.IPv4Address.PrefixLength
+$gateway = $mgmtIP.IPv4DefaultGateway.NextHop
+$dnsServers = ($mgmtIP | Get-DnsClientServerAddress -AddressFamily IPv4).ServerAddresses
+
+# Remove old switch
+Remove-VMSwitch -Name "OldSwitch" -Force
+
+# Create new switch with correct adapters
+$correctAdapters = @("Ethernet", "Ethernet 2") # From management intent
+New-VMSwitch -Name "ConvergedSwitch(ManagementIntent)" `
+ -NetAdapterName $correctAdapters `
+ -EnableEmbeddedTeaming $true `
+ -AllowManagementOS $true
+
+# Reconfigure management IP
+Start-Sleep -Seconds 5
+$newVNIC = Get-NetAdapter | Where-Object { $_.Name -like "vEthernet*" -and $_.Status -eq "Up" } | Select-Object -First 1
+New-NetIPAddress -InterfaceAlias $newVNIC.Name `
+ -IPAddress $ipAddress `
+ -PrefixLength $prefixLength `
+ -DefaultGateway $gateway
+Set-DnsClientServerAddress -InterfaceAlias $newVNIC.Name -ServerAddresses $dnsServers
+```
+
+### Temporary vs. Permanent Virtual Switch
+
+**Temporary switch (created by validator):**
+- Created only if no suitable switch exists
+- Used only for validation testing
+- Cleaned up after validation completes
+- Named according to Network ATC standards
+
+**Permanent switch (pre-existing):**
+- Already exists from previous Network ATC deployment or manual creation
+- Used by validator if it matches requirements
+- Not modified or removed by validator
+
+### Network ATC and Virtual Switches
+
+During actual deployment, Network ATC will create virtual switches based on intents:
+
+```powershell
+# After deployment, Network ATC creates switches
+# Example for management intent:
+Add-NetIntent -Name "ManagementIntent" `
+ -Management `
+ -AdapterName "Ethernet", "Ethernet 2"
+
+# This creates:
+# - External virtual switch with SET
+# - vManagement virtual adapter
+# - Proper IP configuration
+```
+
+The validator's temporary switch mimics this structure for testing purposes.
+
+### Verification Commands
+
+Complete verification of virtual switch setup:
+
+```powershell
+# 1. Check switch exists and is external
+Get-VMSwitch -SwitchType External | Format-Table Name, SwitchType, EmbeddedTeamingEnabled -AutoSize
+
+# 2. Check adapters in the switch match management intent
+$switch = Get-VMSwitch -Name "ConvergedSwitch(ManagementIntent)"
+Get-VMSwitchTeam -Name $switch.Name | Select-Object -ExpandProperty NetAdapterInterfaceDescription
+
+# 3. Check management vNIC exists
+Get-VMNetworkAdapter -ManagementOS | Where-Object { $_.SwitchName -eq $switch.Name }
+
+# 4. Check physical adapters are Up
+Get-NetAdapter | Where-Object { $_.InterfaceDescription -in (Get-VMSwitchTeam -Name $switch.Name).NetAdapterInterfaceDescription }
+
+# 5. Test connectivity
+Test-NetConnection -ComputerName 8.8.8.8
+```
+
+### Related Validators
+
+Prerequisites for this validator:
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_Hyper_V_Readiness** - Must pass first
+
+Validators that run after this validator:
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_MANAGEMENT_VNIC_Readiness** - Validates management vNIC
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_vNIC_Readiness** - Validates test vNIC creation
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_IPReadiness** - Tests IP configuration
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNS_Server_Port_53** - Tests DNS connectivity
+
+### Related Documentation
+
+- [Create a virtual switch for Hyper-V](https://learn.microsoft.com/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines)
+- [Azure Local host network requirements](https://learn.microsoft.com/en-us/azure/azure-local/concepts/host-network-requirements)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-vNIC-Readiness.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-vNIC-Readiness.md
new file mode 100644
index 0000000..1be3196
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-InfraIP-vNIC-Readiness.md
@@ -0,0 +1,273 @@
+# AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_vNIC_Readiness
+
+| Name | AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_vNIC_Readiness |
+|------|-----------------------------------------------------------------------------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Deployment (without ArcGateway), Upgrade (without ArcGateway) |
+
+## Overview
+
+This validator checks that a temporary test virtual network adapter can be created successfully on the virtual switch. This test vNIC is used to assign infrastructure IPs and validate connectivity to DNS servers and Azure endpoints.
+
+## Requirements
+
+1. Ability to create a new virtual network adapter using `Add-VMNetworkAdapter`
+2. The test vNIC must be operational and accessible
+3. `Get-VMNetworkAdapter` cmdlet must work correctly
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information.
+
+```json
+{
+ "Name": "AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_vNIC_Readiness",
+ "DisplayName": "Test virtual adapter readiness for all IP in infra IP pool",
+ "Title": "Test virtual adapter readiness for all IP in infra IP pool",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Test virtual adapter readiness for all IP in infra IP pool",
+ "Remediation": "Make sure Add/Get-VMNetworkAdapter on SERVER01 can run correctly.",
+ "TargetResourceID": "Infra_IP_Connection_VNICReadiness",
+ "TargetResourceName": "Infra_IP_Connection_VNICReadiness",
+ "TargetResourceType": "Infra_IP_Connection_VNICReadiness",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "SERVER01",
+ "Resource": "VNICReadiness",
+ "Detail": "[FAILED] Cannot test connection for infra IP. VM network adapter is not configured correctly on host SERVER01.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Cannot Create Test vNIC
+
+**Error Message:**
+```text
+[FAILED] Cannot test connection for infra IP. VM network adapter is not configured correctly on host SERVER01.
+```
+
+**Root Cause:** The validator cannot create a temporary test vNIC, which could be caused by:
+- Hyper-V cmdlets not working properly
+- Virtual switch issues
+- Insufficient permissions
+- System resource constraints
+
+#### Remediation Steps
+
+##### 1. Verify Hyper-V Cmdlets Are Working
+
+Test basic Hyper-V virtual adapter operations:
+
+```powershell
+# Test Get-VMNetworkAdapter cmdlet
+Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName, Status -AutoSize
+
+# Test if we can access virtual switch
+Get-VMSwitch | Format-Table Name, SwitchType -AutoSize
+```
+
+If these cmdlets fail, Hyper-V may not be properly installed or the service may be stopped.
+
+##### 2. Check Hyper-V Services
+
+Ensure Hyper-V services are running:
+
+```powershell
+# Check Hyper-V services
+Get-Service -Name vmms | Select-Object Name, Status, StartType | Format-Table -AutoSize
+
+# Start services if stopped
+if ((Get-Service vmms).Status -ne 'Running') {
+ Start-Service vmms
+}
+
+# Verify services are running
+Get-Service -Name vmms
+```
+
+##### 3. Test Creating a Temporary vNIC
+
+Try manually creating a test vNIC to diagnose the issue:
+
+```powershell
+# Get the virtual switch
+$vmSwitch = Get-VMSwitch -SwitchType External | Select-Object -First 1
+
+if (-not $vmSwitch) {
+ Write-Error "No external virtual switch found"
+ exit
+}
+
+# Try creating a test vNIC
+$testVNICName = "TestVNIC_Temp"
+try {
+ Add-VMNetworkAdapter -ManagementOS -Name $testVNICName -SwitchName $vmSwitch.Name
+ Write-Host "✓ Successfully created test vNIC" -ForegroundColor Green
+
+ # Verify it exists
+ $testVNIC = Get-VMNetworkAdapter -ManagementOS -Name $testVNICName
+ if ($testVNIC) {
+ Write-Host "✓ Test vNIC is accessible" -ForegroundColor Green
+ Write-Host " Name: $($testVNIC.Name)" -ForegroundColor White
+ Write-Host " Switch: $($testVNIC.SwitchName)" -ForegroundColor White
+ Write-Host " Status: $($testVNIC.Status)" -ForegroundColor White
+ }
+
+ # Clean up
+ Remove-VMNetworkAdapter -ManagementOS -Name $testVNICName -Confirm:$false
+ Write-Host "✓ Successfully removed test vNIC" -ForegroundColor Green
+
+} catch {
+ Write-Host "✗ Failed to create test vNIC" -ForegroundColor Red
+ Write-Host " Error: $($_.Exception.Message)" -ForegroundColor Red
+}
+```
+
+##### 4. Check for Resource Constraints
+
+Verify system has sufficient resources:
+
+```powershell
+# Check available memory
+$os = Get-CimInstance Win32_OperatingSystem
+$freeMemoryGB = [math]::Round($os.FreePhysicalMemory / 1MB, 2)
+Write-Host "Free Memory: $freeMemoryGB GB"
+
+# Check existing vNIC count
+$vnicCount = (Get-VMNetworkAdapter -ManagementOS).Count
+Write-Host "Existing Management vNICs: $vnicCount"
+
+# Check virtual switch health
+Get-VMSwitch | ForEach-Object {
+    # Capture the switch name first; inside Where-Object below, $_ refers to the vNIC
+    $switchName = $_.Name
+    Write-Host "`nSwitch: $switchName"
+    Write-Host "  Type: $($_.SwitchType)"
+    Write-Host "  Embedded Teaming: $($_.EmbeddedTeamingEnabled)"
+
+    $adapters = @(Get-VMNetworkAdapter -ManagementOS | Where-Object { $_.SwitchName -eq $switchName })
+    Write-Host "  Connected vNICs: $($adapters.Count)"
+}
+```
+
+##### 5. Check Permissions
+
+Ensure you're running with administrator privileges:
+
+```powershell
+# Check if running as administrator
+$isAdmin = ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)
+
+if ($isAdmin) {
+ Write-Host "✓ Running with administrator privileges" -ForegroundColor Green
+} else {
+ Write-Host "✗ NOT running with administrator privileges" -ForegroundColor Red
+ Write-Host " The Environment Validator must run with administrator rights" -ForegroundColor Yellow
+}
+```
+
+##### 6. Restart Hyper-V Services
+
+If cmdlets are not working, try restarting Hyper-V services:
+
+```powershell
+# Restart Hyper-V Virtual Machine Management service
+Restart-Service vmms -Force
+
+# Wait a moment
+Start-Sleep -Seconds 5
+
+# Verify services are running
+Get-Service vmms, vmcompute | Format-Table Name, Status -AutoSize
+
+# Test vNIC operations again
+Get-VMNetworkAdapter -ManagementOS | Select-Object Name, SwitchName
+```
+
+##### 7. Check Event Logs
+
+Review Hyper-V event logs for errors:
+
+```powershell
+# Check recent Hyper-V errors
+Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-*" -MaxEvents 20 -ErrorAction SilentlyContinue |
+ Where-Object { $_.LevelDisplayName -eq "Error" } |
+ Select-Object TimeCreated, LogName, Message |
+ Format-List
+
+# Check specifically for VMMS errors
+Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-VMMS-Admin" -MaxEvents 10 -ErrorAction SilentlyContinue |
+ Where-Object { $_.LevelDisplayName -eq "Error" } |
+ Format-List TimeCreated, Message
+```
+
+##### 8. Retry the Validation
+
+After fixing the issues, re-run the Environment Validator.
+
+---
+
+## Additional Information
+
+### Test vNIC Purpose
+
+The validator creates a temporary test vNIC to:
+
+1. Assign infrastructure IPs one at a time
+2. Test connectivity from each IP to DNS servers (port 53)
+3. Test connectivity from each IP to required Azure endpoints
+4. Collect connectivity validation results
+5. Clean up and remove the test vNIC after validation
+
+### Test vNIC Lifecycle
+
+```
+1. Create temporary vNIC → connected to virtual switch
+2. For each infrastructure IP (up to first 9 IPs):
+ a. Assign IP to test vNIC
+ b. Wait for IP to become ready
+ c. Test connectivity to DNS servers
+ d. Test connectivity to Azure endpoints
+ e. Remove IP from test vNIC
+3. Remove temporary vNIC (cleanup)
+```
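+
+A minimal sketch of the same create-test-remove pattern, using try/finally so the temporary vNIC is always cleaned up; the vNIC name is a placeholder:
+
+```powershell
+$switch   = Get-VMSwitch -SwitchType External | Select-Object -First 1
+$vnicName = "TempTestVNIC" # Placeholder name for the temporary vNIC
+
+# Create the temporary vNIC on the switch
+Add-VMNetworkAdapter -ManagementOS -Name $vnicName -SwitchName $switch.Name
+
+try {
+    # ...assign infrastructure IPs and run connectivity tests here...
+    Get-VMNetworkAdapter -ManagementOS -Name $vnicName | Format-List Name, SwitchName, Status
+} finally {
+    # Clean up even if a test throws
+    Remove-VMNetworkAdapter -ManagementOS -Name $vnicName -Confirm:$false
+}
+```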
+
+### How Many vNICs Can Exist?
+
+There's no hard limit on management vNICs, but:
+- Too many vNICs can consume system resources
+- Typical deployments have 1-3 management vNICs
+- Test vNIC is temporary and cleaned up after validation
+
+### Prerequisites for This Validator
+
+This validator requires:
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_Hyper_V_Readiness** - Hyper-V must be installed
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_VMSwitch_Readiness** - Virtual switch must exist
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_MANAGEMENT_VNIC_Readiness** - Management vNIC must exist
+
+### Related Validators
+
+Validators that run after this validator:
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNSClientServerAddress_Readiness** - Validates DNS configuration
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_IPReadiness** - Tests infrastructure IP assignment
+- **AzureLocal_NetworkInfraConnection_Test_Infra_IP_Connection_DNS_Server_Port_53** - Tests DNS connectivity
+
+### Related Documentation
+
+- [Hyper-V Network Virtualization](https://learn.microsoft.com/windows-server/networking/sdn/technologies/hyper-v-network-virtualization/hyper-v-network-virtualization)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-IntentVirtualAdapterExistence.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-IntentVirtualAdapterExistence.md
new file mode 100644
index 0000000..6cd937e
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-IntentVirtualAdapterExistence.md
@@ -0,0 +1,383 @@
+# AzureLocal_Network_Test_IntentVirtualAdapterExistence
+
+| Name | AzureLocal_Network_Test_IntentVirtualAdapterExistence |
+|------|---------------------------------------------------------|
+| Severity | Informational: This validator provides diagnostic information. |
+| Applicable Scenarios | Upgrade, Pre-Update |
+
+## Overview
+
+This validator checks that all expected virtual network adapters created by Network ATC intents exist and are in the "Up" state. These virtual adapters are critical for network traffic separation and proper cluster operation.
+
+## Requirements
+
+For each Network ATC intent configured:
+
+1. **Management Intent**: vManagement(IntentName) virtual adapter must exist and be Up
+2. **Converged Intent (Storage + Management/Compute)**: vSMB(IntentName#AdapterName) virtual adapters must exist and be Up for each storage adapter
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about which virtual adapters are missing or not Up.
+
+```json
+{
+ "Name": "AzureLocal_Network_Test_IntentVirtualAdapterExistence",
+ "DisplayName": "Test intent virtual adapter readiness on server",
+ "Title": "Test intent virtual adapter readiness",
+ "Status": 1,
+ "Severity": 0,
+ "Description": "Check intent virtual adapter readiness on NODE1",
+ "Remediation": "https://aka.ms/azurelocal/envvalidator/IntentVirtualAdapterExistence",
+ "TargetResourceID": "IntentVirtualAdapterExistence",
+ "TargetResourceName": "IntentVirtualAdapterExistence",
+ "TargetResourceType": "IntentVirtualAdapterExistence",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "NODE1",
+ "Resource": "IntentVirtualAdapterExistence",
+ "Detail": "Virtual adapter status on NODE1\n ERROR: VMNetworkAdapter vManagement(ManagementIntent) does NOT exist.\n Pass: NetAdapter vSMB(StorageIntent#Ethernet 2) exists and is Up.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Virtual Network Adapter Does Not Exist
+
+**Error Message:**
+```text
+Virtual adapter status on NODE1
+ ERROR: VMNetworkAdapter vManagement(ManagementIntent) does NOT exist.
+ ERROR: VMNetworkAdapter vSMB(StorageIntent#Ethernet 2) does NOT exist.
+```
+
+**Root Cause:** The expected virtual network adapters were not created by Network ATC, or were deleted/removed. This typically indicates:
+- Network ATC intent was not applied successfully
+- Virtual switch was not created
+- Intent configuration is incomplete or failed
+
+#### Remediation Steps
+
+##### 1. Check Network ATC Intent Status
+
+Verify Network ATC intent configuration and status:
+
+```powershell
+# Check all Network ATC intents
+Get-NetIntent | Format-List IntentName, IntentType, IsComputeIntentSet, IsManagementIntentSet, IsStorageIntentSet
+
+# Check intent status
+Get-NetIntentStatus
+
+```
+
+**Expected output for healthy intent:**
+```
+IntentName            : ManagementIntent
+IsManagementIntentSet : True
+ProvisioningStatus : Completed
+ConfigurationStatus : Success
+```
+
+##### 2. Check Virtual Switch Existence
+
+Verify that the virtual switch was created:
+
+```powershell
+# Check virtual switches
+Get-VMSwitch
+
+```
+
+**Expected:** At least one external virtual switch with `EmbeddedTeamingEnabled = True` (a SET switch) that uses the adapters defined for the management intent.
+
+##### 3. Check Virtual Network Adapters
+
+List all virtual network adapters:
+
+```powershell
+# Check VMNetworkAdapters
+Get-VMNetworkAdapter -ManagementOS | Select-Object Name, SwitchName, Status | Format-Table -AutoSize
+
+# Check corresponding NetAdapters
+Get-NetAdapter | Where-Object { $_.Name -like "vManagement*" -or $_.Name -like "vSMB*" } |
+ Select-Object Name, Status, LinkSpeed, InterfaceDescription |
+ Format-Table -AutoSize
+```
+
+##### 4. Option A: Re-apply Network ATC Intent
+
+If the intent exists but virtual adapters are missing, try re-applying the intent:
+
+```powershell
+# Get existing intent configuration
+$intent = Get-NetIntent -Name "ManagementIntent" # Replace with your intent name
+
+# Remove and re-add the intent
+Remove-NetIntent -Name "ManagementIntent"
+
+# Wait for cleanup
+Start-Sleep -Seconds 30
+
+# Re-add intent (example - adjust parameters to match your configuration)
+Add-NetIntent -Name "ManagementIntent" `
+    -Management `
+    -AdapterName @("Ethernet", "Ethernet 1")   # Include any other parameters your configuration requires
+
+# Monitor provisioning status
+Get-NetIntentStatus -Name "ManagementIntent"
+
+```
+
+##### 5. Option B: Manually Create Missing Virtual Adapter (Temporary Workaround)
+
+> **Note:** This is not recommended for production. The proper solution is to fix the Network ATC intent. Use this only for emergency situations.
+
+```powershell
+# Get the virtual switch name
+$vmSwitch = Get-VMSwitch -SwitchType External -Name "SwitchName" # Replace with your VMSwitch name
+
+# Create missing vManagement adapter
+Add-VMNetworkAdapter -ManagementOS -Name "vManagement(ManagementIntent)" -SwitchName $vmSwitch.Name
+
+# The underlying NetAdapter is created as "vEthernet (vManagement(ManagementIntent))"; rename it to the expected adapter name
+Get-NetAdapter -Name "vEthernet (vManagement(ManagementIntent))" | Rename-NetAdapter -NewName "vManagement(ManagementIntent)"
+
+# Verify creation
+Get-VMNetworkAdapter -ManagementOS -Name "vManagement(ManagementIntent)"
+Get-NetAdapter -Name "vManagement(ManagementIntent)"
+```
+
+##### 6. Check for Intent Configuration Errors
+
+Review the intent configuration for errors:
+
+```powershell
+# Check detailed error messages
+Get-NetIntentStatus
+
+# Check the Hyper-V virtual switch event log for errors
+Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-VmSwitch-Operational" -MaxEvents 50 |
+ Select-Object TimeCreated, Message |
+ Format-List
+```
+
+##### 7. Verify Physical Adapters
+
+Ensure the physical adapters referenced in the intent are present and Up:
+
+```powershell
+# Check physical adapters used in intents
+$intent = Get-NetIntent -Name "ManagementIntent"
+$adapterNames = $intent.NetAdapterNamesAsList
+
+foreach ($adapterName in $adapterNames) {
+ $adapter = Get-NetAdapter -Name $adapterName -ErrorAction SilentlyContinue
+ if ($adapter) {
+ Write-Host "Adapter $adapterName - Status: $($adapter.Status), Speed: $($adapter.LinkSpeed)" -ForegroundColor Green
+ } else {
+ Write-Host "ERROR: Adapter $adapterName not found!" -ForegroundColor Red
+ }
+}
+```
+
+---
+
+### Failure: Virtual Network Adapter Exists But Is Not Up
+
+**Error Message:**
+```text
+Virtual adapter status on NODE1
+ Pass: VMNetworkAdapter vManagement(ManagementIntent) exists.
+ ERROR: NetAdapter vManagement(ManagementIntent) does NOT exist or is not Up.
+```
+
+**Root Cause:** The virtual adapter exists but is in "Disabled" or "Disconnected" state.
+
+#### Remediation Steps
+
+##### 1. Check Adapter Status
+
+```powershell
+# Check the adapter status
+Get-NetAdapter -Name "vManagement(ManagementIntent)" | Format-List Name, Status, MediaConnectionState
+
+# Check VM network adapter status
+Get-VMNetworkAdapter -ManagementOS -Name "vManagement(ManagementIntent)" | Format-List Name, Status, Connected
+```
+
+##### 2. Enable the Adapter
+
+If the adapter is disabled:
+
+```powershell
+# Enable NetAdapter
+Enable-NetAdapter -Name "vManagement(ManagementIntent)" -Confirm:$false
+
+# Verify status
+Get-NetAdapter -Name "vManagement(ManagementIntent)" | Select-Object Name, Status
+```
+
+##### 3. Check Virtual Switch Connection
+
+Ensure the adapter is connected to the virtual switch:
+
+```powershell
+# Check VMNetworkAdapter connection
+$vmAdapter = Get-VMNetworkAdapter -ManagementOS -Name "vManagement(ManagementIntent)"
+$vmAdapter
+
+# Check virtual switch status
+Get-VMSwitch -Name $vmAdapter.SwitchName
+```
+
+##### 4. Check Physical Adapter Status
+
+If virtual adapter is down, check underlying physical adapters:
+
+```powershell
+# Get virtual switch
+$vmSwitch = Get-VMSwitch -SwitchType External | Where-Object { $_.Name -eq "ConvergedSwitch(ManagementIntent)" }
+
+# Check physical adapters in the SET team
+Get-VMSwitchTeam -Name $vmSwitch.Name
+
+# Check team members
+Get-VMSwitchTeam -Name $vmSwitch.Name | Select-Object -ExpandProperty NetAdapterInterfaceDescription | ForEach-Object {
+ $physAdapter = Get-NetAdapter -InterfaceDescription $_
+ [PSCustomObject]@{
+ Name = $physAdapter.Name
+ Status = $physAdapter.Status
+ LinkSpeed = $physAdapter.LinkSpeed
+ }
+} | Format-Table -AutoSize
+```
+
+If physical adapters are down, enable them:
+
+```powershell
+Enable-NetAdapter -Name "Ethernet" -Confirm:$false
+Enable-NetAdapter -Name "Ethernet 1" -Confirm:$false
+```
+
+---
+
+## Additional Information
+
+### Understanding Virtual Adapter Types
+
+Network ATC creates different virtual adapters based on intent type:
+
+| Intent Configuration | Virtual Adapter Created | Purpose |
+|---------------------|------------------------|---------|
+| Management only | vManagement(IntentName) | Management traffic |
+| Storage only | None (uses physical adapters) | Storage traffic on physical adapters |
+| Management + Storage (converged) | vManagement(IntentName) + vSMB(IntentName#Adapter) per storage adapter | Separated management and storage on same physical adapters |
+| Compute + Storage (converged) | vSMB(IntentName#Adapter) per storage adapter | Separated compute and storage |
+
+### Example: Management Intent Virtual Adapters
+
+**Intent configuration:**
+```json
+{
+ "name": "ManagementIntent",
+ "trafficType": ["Management"],
+ "adapter": ["Ethernet", "Ethernet 1"]
+}
+```
+
+**Expected virtual adapters:**
+- vManagement(ManagementIntent) - Single virtual adapter for management traffic
+
+### Example: Converged Intent Virtual Adapters
+
+**Intent configuration:**
+```json
+{
+ "name": "ConvergedIntent",
+ "trafficType": ["Management", "Compute", "Storage"],
+ "adapter": ["Ethernet 2", "Ethernet 3", "Ethernet 4", "Ethernet 5"]
+}
+```
+
+**Expected virtual adapters:**
+- vManagement(ConvergedIntent) - For management traffic
+- vSMB(ConvergedIntent#Ethernet 2) - For storage traffic on Ethernet 2
+- vSMB(ConvergedIntent#Ethernet 3) - For storage traffic on Ethernet 3
+- vSMB(ConvergedIntent#Ethernet 4) - For storage traffic on Ethernet 4
+- vSMB(ConvergedIntent#Ethernet 5) - For storage traffic on Ethernet 5
+
+### Checking Expected Virtual Adapters
+
+To determine what virtual adapters should exist based on your intents:
+
+```powershell
+# List the virtual adapters each configured intent is expected to create
+$intents = Get-NetIntent
+
+foreach ($intent in $intents) {
+ Write-Host "`nIntent: $($intent.IntentName)" -ForegroundColor Cyan
+ Write-Host " Traffic Types: $($intent.IntentType)" -ForegroundColor White
+
+ $expectedAdapters = @()
+
+ # Check for management
+ if ($intent.IsManagementIntentSet) {
+ $expectedAdapters += "vManagement($($intent.IntentName))"
+ }
+
+ # Check for converged storage
+ if ($intent.IsStorageIntentSet -and ($intent.IsManagementIntentSet -or $intent.IsComputeIntentSet)) {
+ foreach ($adapter in $intent.NetAdapterNamesAsList) {
+ $expectedAdapters += "vSMB($($intent.IntentName)#$adapter)"
+ }
+ }
+
+ Write-Host " Expected VIRTUAL adapters:" -ForegroundColor Yellow
+ foreach ($adapter in $expectedAdapters) {
+ $exists = Get-NetAdapter -Name $adapter -ErrorAction SilentlyContinue
+ if ($exists -and $exists.Status -eq "Up") {
+ Write-Host " ✓ $adapter (Status: $($exists.Status))" -ForegroundColor Green
+ } elseif ($exists) {
+ Write-Host " ⚠ $adapter (Status: $($exists.Status))" -ForegroundColor Yellow
+ } else {
+ Write-Host " ✗ $adapter (MISSING)" -ForegroundColor Red
+ }
+ }
+}
+```
+
+### Network ATC Intent Lifecycle
+
+**Checking lifecycle status:**
+```powershell
+Get-NetIntentStatus | Select-Object IntentName, ProvisioningStatus, ConfigurationStatus, LastUpdated, LastSuccess | Format-List
+```
+
+### Common Issues and Solutions
+
+| Issue | Cause | Solution |
+|-------|-------|----------|
+| Virtual adapter doesn't exist | Intent not applied | Re-apply Network ATC intent |
+| Adapter exists but is Down | Physical adapter issue | Check and enable physical adapters |
+| VMNetworkAdapter exists, NetAdapter doesn't | Driver or binding issue | Restart Hyper-V Virtual Switch service |
+| All adapters missing | Virtual switch not created | Check physical adapter availability, re-create intent |
+
+### Related Documentation
+
+- [Manage Network ATC](https://learn.microsoft.com/azure-stack/hci/deploy/network-atc)
+- [Host network requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MOCStackNetworkPort.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MOCStackNetworkPort.md
index 931861f..4fbe36f 100644
--- a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MOCStackNetworkPort.md
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MOCStackNetworkPort.md
@@ -22,7 +22,7 @@ This environment validator fails when the node that runs the MOCStack checks doe
> **Notes**
>
> * This happens during the validation phase before the deployment begins.
-> * `internetbeacon.msedge.net` is used only by the connectivity probe behavior of `Test-NetConnection`. It is **not** listed among the required Azure Local service endpoints in Microsoft’s Firewall Requirements page for Azure Local: https://learn.microsoft.com/en-us/azure/azure-local/concepts/firewall-requirements?view=azloc-2508
+> * `internetbeacon.msedge.net` is used only by the connectivity probe behavior of `Test-NetConnection`. It is **not** listed among the required Azure Local service endpoints in Microsoft’s Firewall Requirements page for Azure Local: https://learn.microsoft.com/en-us/azure/azure-local/concepts/firewall-requirements
## Requirements
@@ -247,7 +247,7 @@ Then run the Environment Validator again. The status for `AzStackHci_MOCStack_Ne
* This validator checks generic internet reachability through `internetbeacon.msedge.net` using the default behavior of `Test-NetConnection`
* The requirement applies during validation before deployment
-* `internetbeacon.msedge.net` is not part of the documented Azure Local firewall allowlist for service endpoints: https://learn.microsoft.com/en-us/azure/azure-local/concepts/firewall-requirements?view=azloc-2508
+* `internetbeacon.msedge.net` is not part of the documented Azure Local firewall allowlist for service endpoints: https://learn.microsoft.com/en-us/azure/azure-local/concepts/firewall-requirements
* Document any temporary firewall changes and remove them after validation if they are not needed for ongoing operations
## Quick summary
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIP-Connection.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIP-Connection.md
new file mode 100644
index 0000000..786350a
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIP-Connection.md
@@ -0,0 +1,323 @@
+# AzureLocal_Network_Test_NodeManagementIPConnection
+
+| Name | AzureLocal_Network_Test_NodeManagementIPConnection |
+|------|-----------------------------------------------------|
+| Severity | Informational: This validator provides diagnostic information. |
+| Applicable Scenarios | Deployment (Static IP only), Pre-Update (Static IP only) |
+
+## Overview
+
+This validator checks that each node's management IP address can be connected to via remote PowerShell session, and that the computer name obtained from that session matches the expected node name in the deployment configuration.
+
+## Requirements
+
+For static IP deployments:
+1. Each management IP defined in the deployment configuration must be reachable via WinRM
+2. The computer name from the remote session must match the expected node name in the configuration
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about the management IP connectivity test.
+
+```json
+{
+ "Name": "AzureLocal_Network_Test_NodeManagementIPConnection",
+ "DisplayName": "Validate that management IP of host machine is able to be connected",
+ "Title": "Validate that management IP of host machine is able to be connected",
+ "Status": 1,
+ "Severity": 0,
+ "Description": "Management IP of each host machine defined in ECE config should be able to be connected correctly.",
+ "Remediation": "https://aka.ms/azurelocal/envvalidator/networkmgmtipconfiguration",
+ "TargetResourceID": "NODE1, NetworkHostManagementIP",
+ "TargetResourceName": "NODE1, NetworkHostManagementIP",
+ "TargetResourceType": "NetworkHostManagementIP",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "NODE1",
+ "Resource": "NetworkHostManagementIP",
+ "Detail": "Management IP on server NODE1 is NOT correct. Expected node name: NODE1, actual node name from IP: NODE2.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Cannot Connect to Management IP
+
+**Root Cause:** The management IP address defined for this node cannot be reached via Windows Remote Management (WinRM). Possible causes include:
+- IP address is incorrect or not assigned
+- Network connectivity issues
+- Firewall blocking WinRM traffic
+- WinRM service not running
+- IP configured on wrong adapter
+
+#### Remediation Steps
+
+##### 1. Verify IP Address is Assigned to the Node
+
+Connect to the node locally (console, KVM, or physical access) and verify the IP configuration:
+
+```powershell
+# Check all IP addresses assigned to network adapters
+Get-NetIPAddress -AddressFamily IPv4 |
+ Where-Object { $_.IPAddress -notlike "169.254.*" -and $_.IPAddress -ne "127.0.0.1" } |
+ Select-Object InterfaceAlias, IPAddress, PrefixLength |
+ Format-Table -AutoSize
+
+# Look for the expected management IP in the output
+```
+
+If the expected management IP is not found:
+- Verify the IP is assigned to the correct network adapter
+- Check that the adapter is connected and has "Up" status
+- Verify network cable is connected
+
+##### 2. Verify Network Connectivity
+
+From the machine running the Environment Validator, test connectivity to the management IP:
+
+```powershell
+# Test WinRM port connectivity
+$mgmtIP = "192.168.1.10" # Replace with the node's management IP
+Test-NetConnection -ComputerName $mgmtIP -Port 5985 -InformationLevel Detailed
+
+# Test ICMP ping
+Test-Connection -ComputerName $mgmtIP -Count 4
+```
+
+If connectivity fails:
+- Check network switch configuration
+- Verify correct VLAN configuration (if VLANs are used)
+- Check that both machines are on the same network segment or have proper routing
+
+##### 3. Verify WinRM Service is Running
+
+On the target node, check WinRM service status:
+
+```powershell
+# Check WinRM service status
+Get-Service -Name WinRM | Format-List Name, Status, StartType
+
+# If not running, start it
+Start-Service -Name WinRM
+
+# Set to start automatically
+Set-Service -Name WinRM -StartupType Automatic
+```
+
+##### 4. Verify Firewall Rules
+
+On the target node, ensure Windows Firewall allows WinRM:
+
+```powershell
+# Check if WinRM firewall rules are enabled
+Get-NetFirewallRule -Name "WINRM-HTTP-In-TCP*" |
+ Select-Object Name, Enabled, Direction, Action |
+ Format-Table -AutoSize
+
+# Enable WinRM HTTP inbound rule if disabled
+Enable-NetFirewallRule -Name "WINRM-HTTP-In-TCP"
+```
+
+##### 5. Test WinRM Configuration
+
+On the target node:
+
+```powershell
+# Test WinRM configuration
+Test-WSMan -ComputerName localhost
+
+# Check WinRM listeners
+Get-WSManInstance -ResourceURI winrm/config/listener -Enumerate
+```
+
+From the validator machine:
+
+```powershell
+# Test remote WinRM access (replace with the node's management IP)
+Test-WSMan -ComputerName "192.168.1.10" -Authentication Default
+```
+
+##### 6. Verify Deployment Configuration
+
+Check your deployment configuration file to ensure:
+- The IP address is correct for this specific node
+- The IP matches what's assigned to the management adapter
+- There are no typos in the IP address
+
+---
+
+### Failure: Node Name Mismatch
+
+**Error Message:**
+```text
+Management IP on server NODE1 is NOT correct. Expected node name: NODE1, actual node name from IP: NODE2.
+```
+
+**Root Cause:** The validator successfully connected to the management IP, but the computer name returned from the remote session does not match the expected node name in the configuration. This indicates one of:
+- Wrong IP address assigned to wrong node
+- IP addresses swapped between nodes
+- Computer name not set correctly on the node
+
+#### Remediation Steps
+
+##### 1. Verify Computer Name on Target Node
+
+Connect to the node at the management IP and verify its computer name:
+
+```powershell
+# On the target node
+$env:COMPUTERNAME
+hostname
+```
+
+Compare this to the expected name in your deployment configuration.
+
+##### 2. If Computer Name is Wrong - Rename the Computer
+
+If the computer name doesn't match what's expected:
+
+```powershell
+# Rename the computer (requires reboot)
+Rename-Computer -NewName "ExpectedNodeName" -Restart -Force
+```
+
+**OR** update your deployment configuration to use the correct computer name.
+
+##### 3. If IP Assignment is Wrong - Correct IP Configuration
+
+If the computer name is correct but assigned the wrong IP:
+
+1. Verify which node should have which IP according to your deployment plan
+2. Correct the IP assignment on the affected nodes:
+
+```powershell
+# Example: Assign correct management IP
+# Find the management adapter
+$mgmtAdapter = Get-NetAdapter | Where-Object { $_.Name -eq "Ethernet" } # Use actual adapter name
+
+# Remove incorrect IP
+Remove-NetIPAddress -InterfaceAlias $mgmtAdapter.Name -Confirm:$false
+
+# Assign the correct IP for this node
+New-NetIPAddress -InterfaceAlias $mgmtAdapter.Name `
+    -IPAddress "192.168.1.10" `
+    -PrefixLength 24 `
+    -DefaultGateway "192.168.1.1"
+```
+
+3. Update DNS settings if needed:
+
+```powershell
+Set-DnsClientServerAddress -InterfaceAlias $mgmtAdapter.Name `
+ -ServerAddresses "192.168.1.100", "192.168.1.101"
+```
+
+##### 4. Verify Node-to-IP Mapping
+
+Cross-check your deployment configuration file:
+
+| Node Name | Expected Management IP | Actual Computer Name | Actual IP |
+|-----------|------------------------|----------------------|-----------|
+| NODE1 | 192.168.1.10 | ? | ? |
+| NODE2 | 192.168.1.11 | ? | ? |
+| NODE3 | 192.168.1.12 | ? | ? |
+
+Ensure every node has the correct IP and the correct name.
+
+---
+
+## Additional Information
+
+### When This Validator Runs
+
+This validator only runs during:
+- **Deployment with Static IP**: When deploying with manually assigned management IPs
+- **Pre-Update with Static IP**: Before update operations when using static IP configuration
+
+It does NOT run for:
+- DHCP-based deployments
+- Add-Server scenarios
+
+### Static vs. DHCP IP Management
+
+| Configuration Method | Validator Applies? | Notes |
+|---------------------|-------------------|-------|
+| Static IP (manual assignment) | ✓ Yes | IPs must be pre-configured on nodes |
+| DHCP | ✗ No | IPs assigned dynamically |
+
+### Management IP Requirements
+
+For static IP deployments:
+
+1. **Before deployment**: Management IPs must be assigned to the first management adapter on each node
+2. **Correct adapter**: IP must be on the adapter specified as the first adapter in the management intent
+3. **Connectivity**: All nodes must be able to reach each other via management IPs
+4. **Name resolution**: Computer names must match the node names in the deployment configuration
+
+### Verifying Management IP Configuration
+
+To verify your management IP configuration is correct:
+
+```powershell
+# On each node, verify:
+
+# 1. Computer name
+$env:COMPUTERNAME
+
+# 2. Management IP and adapter
+Get-NetIPAddress -AddressFamily IPv4 |
+ Where-Object { $_.IPAddress -notlike "169.254.*" -and $_.IPAddress -ne "127.0.0.1" } |
+ Select-Object InterfaceAlias, IPAddress, PrefixLength
+
+# 3. Default gateway
+Get-NetRoute -DestinationPrefix "0.0.0.0/0" | Select-Object NextHop
+
+# 4. DNS servers
+Get-DnsClientServerAddress | Where-Object { $_.ServerAddresses.Count -gt 0 }
+
+# 5. WinRM connectivity
+Test-WSMan -ComputerName localhost
+```
+
+From the validator machine, test connectivity to all nodes:
+
+```powershell
+# Test WinRM to each management IP
+$managementIPs = @("192.168.1.10", "192.168.1.11", "192.168.1.12") # Replace with the actual IPs in your system
+$cred = Get-Credential
+foreach ($ip in $managementIPs) {
+ Write-Host "Testing connection to $ip..." -ForegroundColor Cyan
+ try {
+ $session = New-PSSession -ComputerName $ip -Credential $cred -ErrorAction Stop
+ $computerName = Invoke-Command -Session $session -ScriptBlock { $env:COMPUTERNAME }
+ Write-Host " SUCCESS: Connected to $ip, computer name: $computerName" -ForegroundColor Green
+ Remove-PSSession $session
+ } catch {
+ Write-Host " FAILED: Cannot connect to $ip - $($_.Exception.Message)" -ForegroundColor Red
+ }
+}
+```
+
+### Related Validators
+
+This validator is part of a set of management IP configuration checks:
+- **AzureLocal_Network_Test_NodeManagementIPConnection** (this validator) - Tests connectivity
+- **AzureLocal_Network_Test_Node_ManagementIP_On_Correct_Adapter** - Verifies IP is on the correct adapter
+- **AzureLocal_Network_Test_Node_ManagementIP_Not_Overlap_With_Storage_Subnet** - Verifies no subnet overlap
+
+### Related Documentation
+- [Configure Windows Remote Management](https://learn.microsoft.com/windows/win32/winrm/installation-and-configuration-for-windows-remote-management)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIP-Not-Overlap-Storage-Subnet.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIP-Not-Overlap-Storage-Subnet.md
new file mode 100644
index 0000000..4b271d5
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIP-Not-Overlap-Storage-Subnet.md
@@ -0,0 +1,331 @@
+# AzureLocal_Network_Test_Node_ManagementIP_Not_Overlap_With_Storage_Subnet
+
+| Name | AzureLocal_Network_Test_Node_ManagementIP_Not_Overlap_With_Storage_Subnet |
+|------|------|
+| Severity | Informational: This validator provides diagnostic information. |
+| Applicable Scenarios | Deployment (Static IP only), Pre-Update (Static IP only) |
+
+## Overview
+
+This validator checks that the management IP subnet does not overlap with any storage network subnet. Overlapping subnets can cause routing issues and prevent storage traffic from functioning correctly.
+
+## Requirements
+
+For static IP deployments:
+1. The management IP subnet must be on a different subnet than all storage adapter subnets
+2. Each storage adapter subnet must be unique and isolated from management traffic
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about subnet overlap.
+
+```json
+{
+ "Name": "AzureLocal_Network_Test_Node_ManagementIP_Not_Overlap_With_Storage_Subnet",
+ "DisplayName": "Test machine management IP is not in the same subnet as any storage network",
+ "Title": "Test machine management IP is not in the same subnet as any storage network",
+ "Status": 1,
+ "Severity": 0,
+ "Description": "Test machine management IP is not in the same subnet as any storage network",
+ "Remediation": "https://learn.microsoft.com/azure-stack/hci/deploy/deployment-tool-checklist",
+ "TargetResourceID": "NODE1, MgmtIPNotOverlapStorageSubnet",
+ "TargetResourceName": "NODE1, MgmtIPNotOverlapStorageSubnet",
+ "TargetResourceType": "MgmtIPNotOverlapStorageSubnet",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "NODE1, MgmtIPNotOverlapStorageSubnet",
+ "Resource": "NODE1, MgmtIPNotOverlapStorageSubnet",
+ "Detail": "Management IP 10.71.1.10 on subnet 10.71.1.0/24 overlaps with storage subnet(s): 10.71.1.0/24, 10.71.2.0/24.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Management IP Overlaps with Storage Subnet
+
+**Error Message:**
+```text
+Management IP 10.71.1.10 on subnet 10.71.1.0/24 overlaps with storage subnet(s): 10.71.1.0/24, 10.71.2.0/24.
+```
+
+**Root Cause:** The management IP is configured in the same subnet as one or more storage adapters. This creates a routing conflict where the system cannot determine whether traffic should go through the management adapter or storage adapters.
+
+#### Why This is a Problem
+
+1. **Routing conflicts**: OS cannot determine which adapter to use for traffic in the overlapping subnet
+2. **Storage performance**: Management traffic may interfere with storage performance
+3. **Network isolation**: Best practice is to separate management and storage traffic on different subnets
+4. **Troubleshooting complexity**: Overlap makes network issues harder to diagnose
+
+#### Remediation Steps
+
+##### 1. Identify Current Network Configuration
+
+On the affected node, check current IP configuration:
+
+```powershell
+# View all IP addresses and their subnets
+Get-NetIPAddress -AddressFamily IPv4 |
+ Where-Object { $_.IPAddress -notlike "169.254.*" -and $_.IPAddress -ne "127.0.0.1" } |
+ ForEach-Object {
+ $ip = $_.IPAddress
+ $prefix = $_.PrefixLength
+ # Calculate the subnet: convert to a big-endian integer and apply the prefix
+ # mask (IPAddress.Address is little-endian, so masking it directly is wrong)
+ $ipBytes = ([IPAddress]$ip).GetAddressBytes()
+ [Array]::Reverse($ipBytes)
+ $ipInt = [BitConverter]::ToUInt32($ipBytes, 0)
+ $maskInt = [uint32](([uint32]::MaxValue -shl (32 - $prefix)) -band [uint32]::MaxValue)
+ $netBytes = [BitConverter]::GetBytes([uint32]($ipInt -band $maskInt))
+ [Array]::Reverse($netBytes)
+ $subnet = ([IPAddress]$netBytes).IPAddressToString
+
+ [PSCustomObject]@{
+ Adapter = $_.InterfaceAlias
+ IP = $ip
+ Prefix = $prefix
+ Subnet = "$subnet/$prefix"
+ }
+ } | Format-Table -AutoSize
+```
+
+Example output showing the problem:
+```
+Adapter IP Prefix Subnet
+------- -- ------ ------
+vManagement(...) 10.71.1.10 24 10.71.1.0/24 ← Management
+vSMB(...#Ethernet 2) 10.71.1.20 24 10.71.1.0/24 ← Storage (OVERLAP!)
+vSMB(...#Ethernet 3) 10.71.2.20 24 10.71.2.0/24 ← Storage
+```
+
+##### 2. Review Your Network Design
+
+Check your deployment configuration and network plan:
+
+**Storage Auto-IP (default):**
+- Storage subnets are automatically assigned as: 10.71.1.0/24, 10.71.2.0/24, 10.71.3.0/24, etc.
+- One subnet per VLAN or per adapter (depending on configuration)
+
+**Static Storage IP:**
+- Storage subnets are defined in the `storageNetworks` section of your deployment configuration
+
+##### 3. Option A: Change Management IP Subnet (Recommended)
+
+Move the management network to a different subnet:
+
+```powershell
+# Example: Change management from 10.71.1.x to 192.168.1.x
+
+# Step 1: Remove current management IP
+$mgmtAdapter = "vManagement(ManagementIntent)" # Or physical adapter name
+Remove-NetIPAddress -InterfaceAlias $mgmtAdapter -Confirm:$false
+Remove-NetRoute -InterfaceAlias $mgmtAdapter -Confirm:$false
+
+# Step 2: Assign new management IP in different subnet
+New-NetIPAddress -InterfaceAlias $mgmtAdapter `
+ -IPAddress "192.168.1.10" `
+ -PrefixLength 24 `
+ -DefaultGateway "192.168.1.1"
+
+# Step 3: Update DNS servers
+Set-DnsClientServerAddress -InterfaceAlias $mgmtAdapter `
+ -ServerAddresses "192.168.1.100", "192.168.1.101"
+
+# Step 4: Verify no overlap
+Get-NetIPAddress -InterfaceAlias $mgmtAdapter
+```
+
+**Important:** If changing management subnet:
+- Update the change on ALL nodes in the cluster
+- Update your deployment configuration file
+- Update DNS records
+- Update any firewall rules or network policies
+- Ensure the new subnet is routable in your network infrastructure
+
+##### 4. Option B: Change Storage IP Subnets
+
+Alternatively, change the storage network subnets (less common):
+
+**For Auto-IP Storage:**
+- Storage subnets are hardcoded to 10.71.x.0/24
+- Cannot easily change without modifying deployment configuration
+- Usually easier to change management subnet instead
+
+**For Static Storage IP:**
+- Update the `storageNetworks` section in your deployment configuration
+- Choose different subnets that don't conflict with management
+
+```json
+{
+ "storageNetworks": [
+ {
+ "name": "Storage1_Network",
+ "networkAdapterName": "Ethernet 2",
+ "vlanId": 711,
+ "storageAdapterIPInfo": [
+ {
+ "physicalNode": "NODE1",
+ "ipv4Address": "172.16.1.10", // Changed from 10.71.1.x
+ "subnetMask": "255.255.255.0"
+ }
+ ]
+ }
+ ]
+}
+```
+
+##### 5. Verify the Fix
+
+After making changes, verify subnets are unique:
+
+```powershell
+# Check all adapter subnets
+$subnets = @{}
+Get-NetIPAddress -AddressFamily IPv4 |
+ Where-Object { $_.IPAddress -notlike "169.254.*" -and $_.IPAddress -ne "127.0.0.1" } |
+ ForEach-Object {
+ $ip = $_.IPAddress
+ $prefix = $_.PrefixLength
+ # Same big-endian conversion as above to avoid the byte-order pitfall
+ $ipBytes = ([IPAddress]$ip).GetAddressBytes()
+ [Array]::Reverse($ipBytes)
+ $ipInt = [BitConverter]::ToUInt32($ipBytes, 0)
+ $maskInt = [uint32](([uint32]::MaxValue -shl (32 - $prefix)) -band [uint32]::MaxValue)
+ $netBytes = [BitConverter]::GetBytes([uint32]($ipInt -band $maskInt))
+ [Array]::Reverse($netBytes)
+ $subnet = ([IPAddress]$netBytes).IPAddressToString + "/$prefix"
+
+ if ($subnets.ContainsKey($subnet)) {
+ $subnets[$subnet] += ", " + $_.InterfaceAlias
+ } else {
+ $subnets[$subnet] = $_.InterfaceAlias
+ }
+ }
+
+# Display results
+$subnets.GetEnumerator()
+```
+
+---
+
+## Additional Information
+
+### Recommended Network Subnet Design
+
+Best practice is to use separate subnet ranges for different traffic types:
+
+| Traffic Type | Recommended Subnet Range | Example | Notes |
+|-------------|------------------------|---------|-------|
+| **Management** | 192.168.x.0/24 or 10.0.x.0/24 | 192.168.1.0/24 | Routable corporate network |
+| **Storage (Auto-IP)** | 10.71.x.0/24 (fixed) | 10.71.1.0/24, 10.71.2.0/24 | Automatically assigned |
+| **Storage (Static)** | 172.16.x.0/24 or 10.72-99.x.0/24 | 172.16.1.0/24, 172.16.2.0/24 | User-defined |
+| **VM/Compute** | As per datacenter design | 10.10.0.0/16 | Depends on workload |
+
+### Example: Good Network Design (No Overlap)
+
+```
+NODE1:
+ vManagement(ManagementIntent) 192.168.1.10/24 ← Management subnet
+ vSMB(StorageIntent#Ethernet 2) 10.71.1.10/24 ← Storage subnet 1
+ vSMB(StorageIntent#Ethernet 3) 10.71.2.10/24 ← Storage subnet 2
+
+NODE2:
+ vManagement(ManagementIntent) 192.168.1.11/24 ← Management subnet
+ vSMB(StorageIntent#Ethernet 2) 10.71.1.11/24 ← Storage subnet 1
+ vSMB(StorageIntent#Ethernet 3) 10.71.2.11/24 ← Storage subnet 2
+```
+
+All subnets are unique - no overlap!
+
+### Example: Bad Network Design (Overlap)
+
+```
+NODE1:
+ vManagement(ManagementIntent) 10.71.1.10/24 ← Management
+ vSMB(StorageIntent#Ethernet 2) 10.71.1.20/24 ← Storage (SAME SUBNET!)
+ vSMB(StorageIntent#Ethernet 3) 10.71.2.20/24 ← Storage
+```
+
+Management and Storage1 are in the same 10.71.1.0/24 subnet - this will cause problems!
+
+### Storage Auto-IP Subnet Assignment
+
+When using storage auto-IP (EnableStorageAutoIP = true):
+- The system automatically assigns storage subnets as 10.71.1.0/24, 10.71.2.0/24, etc.
+- Number of subnets = minimum of (number of storage VLANs, number of storage adapters)
+- IPs assigned within each subnet based on node position
+
+**Therefore:** When using auto-IP, ensure your management subnet is NOT in the 10.71.x.0/24 range.
+
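+As a quick sanity check before deploying, a one-off sketch (the management IP below is a placeholder for your own value):
+
+```powershell
+# Sketch: warn if the management IP sits in the 10.71.0.0/16 space that
+# storage auto-IP draws its 10.71.x.0/24 subnets from
+$mgmtIP = "192.168.1.10" # Replace with your management IP
+if ($mgmtIP -like "10.71.*") {
+    Write-Host "WARNING: $mgmtIP collides with the storage auto-IP range" -ForegroundColor Yellow
+} else {
+    Write-Host "OK: $mgmtIP is outside 10.71.x.x" -ForegroundColor Green
+}
+```
+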
+### Checking Deployment Configuration
+
+The examples below are for reference only. Double-check the [latest example](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.azurestackhci/create-cluster-2-node-switched-custom-storageip/azuredeploy.parameters.json) in case the schema has changed.
+
+**For Auto-IP Storage:**
+```json
+{
+ "hostNetwork": {
+ "enableStorageAutoIP": true, // Auto-IP enabled
+ "intents": [
+ {
+ "name": "ManagementIntent",
+ "adapter": ["Ethernet", "Ethernet 1"]
+ },
+ {
+ "name": "StorageIntent",
+ "adapter": ["Ethernet 2", "Ethernet 3"]
+ }
+ ],
+ "storageNetworks": [
+ { "name": "Storage1", "vlanId": 711 }, // Will get 10.71.1.0/24
+ { "name": "Storage2", "vlanId": 712 } // Will get 10.71.2.0/24
+ ]
+ }
+}
+```
+
+**For Static Storage:**
+```json
+{
+ "hostNetwork": {
+ "enableStorageAutoIP": false, // Static IP
+ "storageNetworks": [
+ {
+ "name": "Storage1_Network",
+ "vlanId": 711,
+ "storageAdapterIPInfo": [
+ {
+ "physicalNode": "NODE1",
+ "ipv4Address": "172.16.1.10", // Explicitly defined
+ "subnetMask": "255.255.255.0"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### Routing and Network Behavior with Overlap
+
+When subnets overlap, Windows networking behavior can be unpredictable:
+
+1. **Metric-based routing**: Windows uses route metrics to decide which adapter to use
+2. **Load balancing**: May attempt to balance traffic across adapters (not desired for storage)
+3. **Failover**: May switch adapters if one becomes unavailable
+4. **Performance impact**: Storage traffic may go through wrong adapter, reducing performance
+
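+To see which adapter Windows would actually pick for an overlapping subnet, you can inspect the competing route metrics. A quick sketch (the subnet below is the example value from this article):
+
+```powershell
+# Sketch: list competing routes for the overlapping subnet; the route with
+# the lowest combined RouteMetric + InterfaceMetric wins
+Get-NetRoute -DestinationPrefix "10.71.1.0/24" -ErrorAction SilentlyContinue |
+    Select-Object InterfaceAlias, NextHop, RouteMetric, InterfaceMetric |
+    Sort-Object RouteMetric, InterfaceMetric
+```
+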
+### Related Validators
+
+This validator is part of a set of management IP configuration checks:
+- **AzureLocal_Network_Test_NodeManagementIPConnection** - Tests connectivity to management IP
+- **AzureLocal_Network_Test_Node_ManagementIP_On_Correct_Adapter** - Verifies IP on correct adapter
+- **AzureLocal_Network_Test_Node_ManagementIP_Not_Overlap_With_Storage_Subnet** (this validator) - Verifies no subnet overlap
+
+### Related Documentation
+- [Network reference patterns](https://learn.microsoft.com/azure-stack/hci/plan/network-patterns-overview)
+- [Custom IPs for storage in Azure Local](https://learn.microsoft.com/en-us/azure/azure-local/plan/cloud-deployment-network-considerations#custom-ips-for-storage)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIP-On-Correct-Adapter.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIP-On-Correct-Adapter.md
new file mode 100644
index 0000000..3fe5940
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIP-On-Correct-Adapter.md
@@ -0,0 +1,271 @@
+# AzureLocal_Network_Test_Node_ManagementIP_On_Correct_Adapter
+
+| Name | AzureLocal_Network_Test_Node_ManagementIP_On_Correct_Adapter |
+|------|------|
+| Severity | Informational: This validator provides diagnostic information. |
+| Applicable Scenarios | Deployment (Static IP only), Pre-Update (Static IP only) |
+
+## Overview
+
+This validator checks that the management IP address configured on each node is assigned to the correct network adapter - specifically, the first adapter defined in the management intent or its corresponding virtual adapter (vManagement).
+
+## Requirements
+
+For static IP deployments:
+1. Management IP must be configured on the first physical adapter specified in the management intent
+2. OR, if a virtual switch has already been created, the management IP must be on the vManagement virtual adapter
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about the adapter configuration.
+
+```json
+{
+ "Name": "AzureLocal_Network_Test_Node_ManagementIP_On_Correct_Adapter",
+ "DisplayName": "Test machine management IP is configured on correct adapter",
+ "Title": "Test machine management IP is configured on correct adapter",
+ "Status": 1,
+ "Severity": 0,
+ "Description": "Test machine management IP is configured on correct adapter",
+ "Remediation": "https://learn.microsoft.com/azure-stack/hci/deploy/deployment-tool-checklist",
+ "TargetResourceID": "NODE1, MgmtIPOnCorrectAdapter",
+ "TargetResourceName": "NODE1, MgmtIPOnCorrectAdapter",
+ "TargetResourceType": "MgmtIPOnCorrectAdapter",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "NODE1, MgmtIPOnCorrectAdapter",
+ "Resource": "NODE1, MgmtIPOnCorrectAdapter",
+ "Detail": "Management IP is not found on expected adapter Ethernet or vManagement(ManagementIntent) on server NODE1.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Management IP Not on Correct Adapter
+
+**Error Message:**
+```text
+Management IP is not found on expected adapter Ethernet or vManagement(ManagementIntent) on server NODE1.
+```
+
+**Root Cause:** The management IP address is configured on the wrong network adapter. It should be on:
+- The first physical adapter defined in the management intent (e.g., "Ethernet"), OR
+- The corresponding virtual management adapter "vManagement(intentName)" if a virtual switch has been created
+
+#### Remediation Steps
+
+##### 1. Identify the Current IP Configuration
+
+On the affected node, check which adapter has the management IP:
+
+```powershell
+# View all IP addresses and their adapters
+Get-NetIPAddress -AddressFamily IPv4 |
+ Where-Object { $_.IPAddress -notlike "169.254.*" -and $_.IPAddress -ne "127.0.0.1" } |
+ Select-Object InterfaceAlias, IPAddress, PrefixLength, AddressState |
+ Format-Table -AutoSize
+```
+
+##### 2. Check Your Management Intent Configuration
+
+Review your deployment configuration to identify the expected adapter:
+
+```json
+{
+ "intents": [
+ {
+ "name": "ManagementIntent",
+ "trafficType": ["Management"],
+ "adapter": ["Ethernet", "Ethernet 2"] // First adapter is "Ethernet"
+ }
+ ]
+}
+```
+
+The management IP should be on the **first adapter** in the list ("Ethernet" in this example).
+
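+To spot-check this on a node, a small sketch (both values are placeholders taken from the examples in this article):
+
+```powershell
+# Sketch: verify the management IP sits on the expected first intent adapter
+$expectedAdapter = "Ethernet"    # First adapter in the management intent
+$mgmtIP = "192.168.1.10"         # This node's management IP
+$hit = Get-NetIPAddress -IPAddress $mgmtIP -ErrorAction SilentlyContinue
+if ($hit -and $hit.InterfaceAlias -eq $expectedAdapter) {
+    Write-Host "OK: $mgmtIP is on $expectedAdapter" -ForegroundColor Green
+} else {
+    Write-Host "Check needed: $mgmtIP is on '$($hit.InterfaceAlias)'" -ForegroundColor Yellow
+}
+```
+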
+##### 3. Option A: Move IP to Correct Physical Adapter (No Virtual Switch)
+
+If no virtual switch has been created yet, move the IP to the correct physical adapter:
+
+```powershell
+# Step 1: Identify the current and target adapters
+$currentAdapter = "Ethernet 3" # Adapter currently has the IP
+$targetAdapter = "Ethernet" # First adapter from management intent
+$managementIP = "192.168.1.10" # The management IP
+$prefixLength = 24
+$gateway = "192.168.1.1"
+$dnsServers = @("192.168.1.100", "192.168.1.101")
+
+# Step 2: Remove IP from current adapter
+Remove-NetIPAddress -InterfaceAlias $currentAdapter -Confirm:$false -ErrorAction SilentlyContinue
+Remove-NetRoute -InterfaceAlias $currentAdapter -Confirm:$false -ErrorAction SilentlyContinue
+
+# Step 3: Assign IP to correct adapter
+New-NetIPAddress -InterfaceAlias $targetAdapter `
+ -IPAddress $managementIP `
+ -PrefixLength $prefixLength `
+ -DefaultGateway $gateway
+
+# Step 4: Set DNS servers
+Set-DnsClientServerAddress -InterfaceAlias $targetAdapter -ServerAddresses $dnsServers
+
+# Step 5: Verify configuration
+Get-NetIPConfiguration -InterfaceAlias $targetAdapter
+```
+
+##### 4. Option B: IP on Virtual Adapter (Virtual Switch Already Exists)
+
+If you've already created a virtual switch via Network ATC, the management IP should be on the vManagement virtual adapter:
+
+```powershell
+# Check if virtual switch and vManagement adapter exist
+Get-VMSwitch -SwitchType External
+Get-VMNetworkAdapter -ManagementOS | Where-Object { $_.Name -like "vManagement*" }
+
+# If vManagement exists but doesn't have the IP, assign it
+$vMgmtAdapter = Get-VMNetworkAdapter -ManagementOS | Where-Object { $_.Name -like "vManagement*" } | Select-Object -First 1
+$vAdapterName = $vMgmtAdapter.Name # e.g., "vManagement(ManagementIntent)"
+
+# Assign IP to vManagement adapter
+New-NetIPAddress -InterfaceAlias $vAdapterName `
+ -IPAddress "192.168.1.10" `
+ -PrefixLength 24 `
+ -DefaultGateway "192.168.1.1"
+
+Set-DnsClientServerAddress -InterfaceAlias $vAdapterName `
+ -ServerAddresses "192.168.1.100", "192.168.1.101"
+```
+
+##### 5. Verify the Configuration
+
+After making changes, verify the management IP is on the correct adapter:
+
+```powershell
+# Expected adapter name from configuration (first adapter in management intent)
+$expectedPhysicalAdapter = "Ethernet"
+$expectedVirtualAdapter = "vManagement(ManagementIntent)" # If virtual switch exists
+
+# Check physical adapter
+Get-NetIPConfiguration -InterfaceAlias $expectedPhysicalAdapter -ErrorAction SilentlyContinue
+
+# Check virtual adapter (if applicable)
+Get-NetIPConfiguration -InterfaceAlias $expectedVirtualAdapter -ErrorAction SilentlyContinue
+
+# Verify connectivity
+Test-NetConnection -ComputerName "192.168.1.1" -InformationLevel Detailed # Gateway
+```
+
+---
+
+## Additional Information
+
+### Understanding Management Adapter Requirements
+
+**Physical Adapter (Pre-Deployment):**
+- Before Network ATC creates the virtual switch, the management IP must be on the first physical adapter defined in the management intent
+- This allows initial connectivity during deployment
+
+**Virtual Adapter (Post-ATC):**
+- After Network ATC creates a SET (Switch Embedded Teaming) virtual switch, it creates a vManagement virtual adapter
+- The management IP is then moved to the vManagement adapter
+- The naming pattern is `vManagement(IntentName)` where IntentName is the name of the management intent
+
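+Given this naming pattern, a small sketch to derive the expected virtual adapter name from the live intent (assumes the Network ATC cmdlets are available on the node):
+
+```powershell
+# Sketch: build the expected vManagement name from the management intent
+$mgmtIntent = Get-NetIntent | Where-Object { $_.IsManagementIntentSet -eq "true" }
+$expectedVNic = "vManagement($($mgmtIntent.IntentName))"
+Write-Host "Expected virtual management adapter: $expectedVNic"
+```
+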
+### Management Adapter Selection
+
+The validator expects the management IP on the **first adapter** in the management intent adapter list:
+
+```json
+{
+ "intents": [
+ {
+ "name": "ManagementIntent",
+ "trafficType": ["Management"],
+ "adapter": ["Ethernet", "Ethernet 2"]
+ // ↑ Management IP should be on "Ethernet" (first in list)
+ }
+ ]
+}
+```
+
+### Checking Current Adapter Configuration
+
+To see all network adapters and their IPs:
+
+```powershell
+# List all adapters with IP configuration
+Get-NetAdapter | Sort-Object Name | ForEach-Object {
+ $ipConfig = Get-NetIPAddress -InterfaceAlias $_.Name -AddressFamily IPv4 -ErrorAction SilentlyContinue |
+ Where-Object { $_.IPAddress -notlike "169.254.*" }
+
+ [PSCustomObject]@{
+ Adapter = $_.Name
+ Status = $_.Status
+ IP = if ($ipConfig) { $ipConfig.IPAddress } else { "None" }
+ Type = if ($_.Name -like "vManagement*") { "Virtual (ATC)" }
+ elseif ($_.Name -like "vEthernet*") { "Virtual" }
+ else { "Physical" }
+ }
+} | Format-Table -AutoSize
+```
+
+### Common Scenarios
+
+| Scenario | Expected Adapter | Notes |
+|----------|-----------------|-------|
+| **Pre-deployment** | First physical adapter from intent | Before Network ATC runs |
+| **During deployment** | Transitioning from physical to virtual | IP may be on either |
+| **Post-deployment** | vManagement(IntentName) | After Network ATC creates virtual switch |
+| **Pre-Update** | vManagement(IntentName) | Should already be configured |
+
+### Virtual Adapter Naming
+
+Network ATC creates virtual adapters with specific naming patterns:
+
+| Intent Type | Virtual Adapter Name | Example |
+|------------|---------------------|---------|
+| Management Only | `vManagement(IntentName)` | `vManagement(ManagementIntent)` |
+| Storage in Converged | `vSMB(IntentName#AdapterName)` | `vSMB(ConvergedIntent#Ethernet2)` |
+
+### Verification After Deployment
+
+After successful deployment, verify the management configuration:
+
+```powershell
+# Check that vManagement adapter exists and has correct IP
+$vMgmt = Get-VMNetworkAdapter -ManagementOS | Where-Object { $_.Name -like "vManagement*" }
+if ($vMgmt) {
+ Write-Host "vManagement adapter found: $($vMgmt.Name)" -ForegroundColor Green
+ Get-NetIPConfiguration -InterfaceAlias $vMgmt.Name
+} else {
+ Write-Host "vManagement adapter not found - check Network ATC status" -ForegroundColor Yellow
+}
+
+# Check Network ATC intent status
+Get-NetIntent | Format-List IntentName, IsComputeIntentSet, IsManagementIntentSet, IsStorageIntentSet
+```
+
+### Related Validators
+
+This validator is part of a set of management IP configuration checks:
+- **AzureLocal_Network_Test_NodeManagementIPConnection** - Tests connectivity to management IP
+- **AzureLocal_Network_Test_Node_ManagementIP_On_Correct_Adapter** (this validator) - Verifies IP on correct adapter
+- **AzureLocal_Network_Test_Node_ManagementIP_Not_Overlap_With_Storage_Subnet** - Verifies no subnet overlap
+
+### Related Documentation
+- [Host network requirements](https://learn.microsoft.com/en-us/azure/azure-local/concepts/host-network-requirements)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIp-In-InfraSubnet.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIp-In-InfraSubnet.md
new file mode 100644
index 0000000..fead0e7
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIp-In-InfraSubnet.md
@@ -0,0 +1,189 @@
+# AzStackHci_Network_Test_Validity_MgmtIp_In_Infra_Subnet
+
+| Name | AzStackHci_Network_Test_Validity_MgmtIp_In_Infra_Subnet |
+|------|------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Deployment, Add-Server |
+
+## Overview
+
+This validator checks that the management IP address assigned to each node is in the same subnet as the infrastructure IP pool. All management IPs and the infrastructure IP pool must be in the same subnet to ensure proper network communication within the cluster.
+
+## Requirements
+
+Each node must meet the following requirement:
+1. The management IP address must be in the same subnet (CIDR) as the infrastructure IP pool
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about the subnet mismatch. The `Source` field identifies the node, and the `TargetResourceID` shows both the management IP subnet and the infrastructure subnet.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_Validity_MgmtIp_In_Infra_Subnet",
+ "DisplayName": "Test Validity Management IP in same infra subnet as IP pools",
+ "Title": "Test Validity Management IP in same infra subnet as IP pools",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Checking management IPs are in same subnet as infra IP pool",
+ "Remediation": "https://aka.ms/hci-envch",
+ "TargetResourceID": "10.0.1.10/24-10.0.2.0/24",
+ "TargetResourceName": "ManagementIpIpPoolCIDR",
+ "TargetResourceType": "ManagementIpIpPoolCIDR",
+ "Timestamp": "\\/Date(timestamp)\\/",
+ "AdditionalData": {
+ "Source": "NODE1",
+ "Resource": "ManagementIpIpPoolCIDR",
+ "Detail": "Management IP [10.0.1.10] is not in the same subnet as infrastructure IP pool. Management subnet: 10.0.1.0/24, Infrastructure subnet: 10.0.2.0/24",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Management IP is in a different subnet than infrastructure IP pool
+
+**Error Message:**
+```text
+Management IP [10.0.1.10] is not in the same subnet as infrastructure IP pool. Management subnet: 10.0.1.0/24, Infrastructure subnet: 10.0.2.0/24
+```
+
+**Root Cause:** The node's management IP address is in a different subnet than the infrastructure IP pool. This prevents proper network communication between the node and cluster resources that receive IP addresses from the infrastructure pool.
+
+#### Remediation Steps
+
+You have two options to remediate this issue:
+
+##### Option 1: Change the node's management IP address to match the infrastructure subnet (Recommended)
+
+Reconfigure the node's management IP to be in the same subnet as the infrastructure IP pool.
+
+1. Identify the infrastructure subnet from the error message (e.g., `10.0.2.0/24`).
+
+2. Choose a new IP address for the node that is:
+ - In the same subnet as the infrastructure IP pool
+ - Outside the infrastructure IP pool range (to avoid conflicts)
+ - Not in use by any other device on the network
+
+3. Change the management IP address on the affected node:
+
+ ```powershell
+ # Make sure $adapterName contains the right management adapter from your system
+ $adapterName = "myAdapterName"
+
+ # Remove the old IP address
+ $oldIP = "10.0.1.10" # Replace with the current management IP
+ Remove-NetIPAddress -InterfaceAlias $adapterName -IPAddress $oldIP -Confirm:$false
+
+ # Remove old default gateway if needed
+ Remove-NetRoute -InterfaceAlias $adapterName -DestinationPrefix "0.0.0.0/0" -Confirm:$false -ErrorAction SilentlyContinue
+
+ # Add the new IP address in the correct subnet
+ $newIP = "10.0.2.10" # Replace with new IP in the infrastructure subnet
+ $prefixLength = 24 # Replace with your subnet prefix length
+ $defaultGateway = "10.0.2.1" # Replace with the gateway for the infrastructure subnet
+
+ New-NetIPAddress -InterfaceAlias $adapterName -IPAddress $newIP -PrefixLength $prefixLength -DefaultGateway $defaultGateway
+ ```
+
+4. Update DNS server configuration if needed:
+
+ ```powershell
+ $dnsServers = @("10.0.2.1") # Replace with DNS servers accessible from the new subnet
+ Set-DnsClientServerAddress -InterfaceAlias $adapterName -ServerAddresses $dnsServers
+ ```
+
+5. Verify the new IP address is configured correctly:
+
+ ```powershell
+ Get-NetIPAddress -InterfaceAlias $adapterName -AddressFamily IPv4 | Where-Object { $_.PrefixOrigin -eq "Manual" }
+ Get-NetIPConfiguration -InterfaceAlias $adapterName
+ ```
+
+6. Verify network connectivity:
+
+ ```powershell
+ # Test connectivity to default gateway
+ Test-NetConnection -ComputerName "10.0.2.1"
+ ```
+
+7. Retry the validation to ensure the issue is resolved.
+
+> **Important**: Changing the management IP address will temporarily disconnect the node from the network. Ensure you have console access or remote management (e.g., iLO, iDRAC) before making this change.
+
+##### Option 2: Adjust the infrastructure IP pool subnet
+
+Reconfigure the infrastructure IP pool to be in the same subnet as the node management IPs.
+
+1. Identify the current management IP subnet of all nodes (e.g., `10.0.1.0/24`).
+
+2. Adjust the infrastructure IP pool to use addresses in the same subnet:
+ - Choose a range within the management subnet that doesn't conflict with node management IPs
+ - For example, if nodes use `10.0.1.1-10.0.1.10`, set the pool to `10.0.1.100-10.0.1.200`
+
+3. Update the infrastructure IP pool configuration through your deployment method (Azure portal, PowerShell, or configuration files).
+
+4. Ensure the subnet mask is consistent across all configuration:
+
+ ```powershell
+ # Example: Verify all nodes are on the same subnet
+ Get-NetIPAddress -AddressFamily IPv4 | Where-Object { $_.PrefixOrigin -eq "Manual" } | Select-Object IPAddress, PrefixLength
+ ```
+
+5. Retry the validation to ensure the issue is resolved.
+
+> **Note**: This option requires modifying your deployment configuration before starting or continuing the deployment.
+
+---
+
+### Understanding CIDR Notation
+
+The error message shows subnets in CIDR notation (e.g., `10.0.1.0/24`). The number after the slash indicates how many bits are used for the network portion:
+
+- `/24` = 255.255.255.0 (most common for small networks)
+- `/23` = 255.255.254.0
+- `/22` = 255.255.252.0
+
+Check [Classless Inter-Domain Routing](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) for more information.
+
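+For a hands-on check, the sketch below tests whether two addresses share a subnet at a given prefix length (the addresses are the example values from the error above):
+
+```powershell
+# Sketch: do a management IP and an infra pool address share the same subnet?
+function ConvertTo-UInt32([IPAddress]$ip) {
+    $bytes = $ip.GetAddressBytes()
+    [Array]::Reverse($bytes)                 # big-endian -> host order
+    [BitConverter]::ToUInt32($bytes, 0)
+}
+$mgmt = ConvertTo-UInt32 "10.0.1.10"
+$pool = ConvertTo-UInt32 "10.0.2.100"
+$prefix = 24
+$mask = [uint32](([uint32]::MaxValue -shl (32 - $prefix)) -band [uint32]::MaxValue)
+$sameSubnet = ($mgmt -band $mask) -eq ($pool -band $mask)
+Write-Host "Same subnet: $sameSubnet"        # False for these example values
+```
+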
+---
+
+## Additional Information
+
+### Best Practices
+
+- Use a consistent subnet across all management IPs and the infrastructure IP pool
+- Document your IP addressing scheme, including:
+ - Node management IP range
+ - Infrastructure IP pool range
+ - Subnet mask (prefix length)
+ - Default gateway
+ - DNS servers
+- Use IP Address Management (IPAM) tools to plan and track IP address assignments
+
+### Common Subnet Configurations
+
+| Prefix Length | Subnet Mask | Usable IPs | Common Use Case |
+|---------------|-----------------|------------|------------------------------------|
+| /24 | 255.255.255.0 | 254 | Small deployments (up to ~250 addresses) |
+| /23 | 255.255.254.0 | 510 | Medium deployments |
+| /22 | 255.255.252.0 | 1022 | Large deployments or room for growth |
+
+### Related Documentation
+
+- [Azure Local Network Requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements)
+- [IP addressing for Azure Local](https://learn.microsoft.com/azure-stack/hci/plan/network-patterns-overview#ip-addressing)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIp-NotIn-InfraPool.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIp-NotIn-InfraPool.md
new file mode 100644
index 0000000..6b8e656
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-MgmtIp-NotIn-InfraPool.md
@@ -0,0 +1,136 @@
+# AzStackHci_Network_Test_Validity_MgmtIp_NotIn_Infra_Pool
+
+| Name | AzStackHci_Network_Test_Validity_MgmtIp_NotIn_Infra_Pool |
+|------|------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Deployment, Add-Server |
+
+## Overview
+
+This validator checks that the management IP address assigned to each node does not overlap with the infrastructure IP pool range. The infrastructure IP pool is reserved for dynamic allocation of IP addresses to cluster resources, and node management IPs must be outside this range to avoid conflicts.
+
+## Requirements
+
+Each node must meet the following requirement:
+1. The management IP address must be outside the infrastructure IP pool range (StartingAddress to EndingAddress)
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about which node's management IP is conflicting with the IP pool. The `Source` field identifies the node, and the `TargetResourceID` shows the IP pool range.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_Validity_MgmtIp_NotIn_Infra_Pool",
+ "DisplayName": "Test Validity Management IP not in Infra Pool",
+ "Title": "Test Validity Management IP not in Infra Pool",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Checking management IPs are not in infra IP pool",
+ "Remediation": "https://aka.ms/hci-envch",
+ "TargetResourceID": "10.0.1.100-10.0.1.200",
+ "TargetResourceName": "ManagementIpIpPoolConfiguration",
+ "TargetResourceType": "ManagementIpIpPoolConfiguration",
+ "Timestamp": "\\/Date(timestamp)\\/",
+ "AdditionalData": {
+ "Source": "NODE1",
+ "Resource": "NodeManagementIP",
+ "Detail": "Management IP [10.0.1.150] is in the infrastructure IP pool range [10.0.1.100-10.0.1.200]",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Management IP is within infrastructure IP pool range
+
+**Error Message:**
+```text
+Management IP [10.0.1.150] is in the infrastructure IP pool range [10.0.1.100-10.0.1.200]
+```
+
+**Root Cause:** The node's management IP address falls within the infrastructure IP pool range. This creates a conflict because the IP address could be dynamically assigned to a cluster resource, resulting in IP address conflicts.
+
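+You can confirm the range arithmetic yourself with a short sketch (the three addresses are the placeholder values from the error message above):
+
+```powershell
+# Sketch: numeric range test for IPv4 addresses
+function ConvertTo-UInt32([IPAddress]$ip) {
+    $bytes = $ip.GetAddressBytes()
+    [Array]::Reverse($bytes)
+    [BitConverter]::ToUInt32($bytes, 0)
+}
+$mgmt  = ConvertTo-UInt32 "10.0.1.150"
+$start = ConvertTo-UInt32 "10.0.1.100"
+$end   = ConvertTo-UInt32 "10.0.1.200"
+if ($mgmt -ge $start -and $mgmt -le $end) {
+    Write-Host "Management IP is INSIDE the infrastructure pool range" -ForegroundColor Red
+} else {
+    Write-Host "Management IP is outside the pool range" -ForegroundColor Green
+}
+```
+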
+#### Remediation Steps
+
+You have two options to remediate this issue:
+
+##### Option 1: Change the node's management IP address (Recommended)
+
+Reconfigure the node to use a management IP address outside the infrastructure IP pool range.
+
+1. Identify the current management IP and the infrastructure IP pool range from the error message.
+
+2. Choose a new IP address for the node that is:
+ - Outside the infrastructure IP pool range
+ - In the same subnet as the infrastructure IP pool
+ - Not in use by any other device on the network
+
+3. Change the management IP address on the affected node:
+
+ ```powershell
+ # Remove the old IP address
+ # Make sure $adapterName contains the right adapter name in your system
+ $adapterName = "myAdapterName"
+ $oldIP = "10.0.1.150" # Replace with the current management IP
+ Remove-NetIPAddress -InterfaceAlias $adapterName -IPAddress $oldIP -Confirm:$false
+
+ # Add the new IP address
+ $newIP = "10.0.1.50" # Replace with new IP outside the pool range
+ $prefixLength = 24 # Replace with your subnet prefix length
+ $defaultGateway = "10.0.1.1" # Replace with your gateway
+
+ New-NetIPAddress -InterfaceAlias $adapterName -IPAddress $newIP -PrefixLength $prefixLength -DefaultGateway $defaultGateway
+ ```
+
+4. Verify the new IP address is configured correctly:
+
+ ```powershell
+ Get-NetIPAddress -InterfaceAlias $adapterName -AddressFamily IPv4 | Where-Object { $_.PrefixOrigin -eq "Manual" }
+ Get-NetIPConfiguration -InterfaceAlias $adapterName
+ ```
+
+5. Retry the validation to ensure the issue is resolved.
+
+> **Important**: Changing the management IP address will temporarily disconnect the node from the network. Ensure you have console access or remote management (e.g., iLO, iDRAC) before making this change.
+
+##### Option 2: Adjust the infrastructure IP pool range
+
+Reconfigure the infrastructure IP pool to exclude the node's management IP address.
+
+1. Identify the current management IP addresses of all nodes in the cluster.
+
+2. Adjust the infrastructure IP pool range to exclude these addresses.
+
+3. Update the infrastructure IP pool configuration through your deployment method (Azure portal, ARM template).
+
+4. Retry the validation to ensure the issue is resolved.
+
+> **Note**: This option requires modifying your deployment configuration files or ARM template parameters before starting the deployment.
+
+---
+
+## Additional Information
+
+### Best Practices
+
+- Reserve a separate range of IP addresses for node management IPs outside the infrastructure IP pool
+- Document the IP addressing scheme for your Azure Local cluster
+- Use IP Address Management (IPAM) tools to track IP address assignments and avoid conflicts
+
+### Related Documentation
+
+- [Azure Local Network Requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NetworkAdapter-DriverConsistency.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NetworkAdapter-DriverConsistency.md
index 615171f..ae60f2f 100644
--- a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NetworkAdapter-DriverConsistency.md
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NetworkAdapter-DriverConsistency.md
@@ -26,7 +26,7 @@ Adapters that are part of any one Network Intent must meet the following require
1. Must exist on the system
2. Must use the same driver version across all nodes in the cluster
-See [Azure Local - Host Network Requirements](https://docs.azure.cn/en-us/azure-local/concepts/host-network-requirements#driver-requirements) for more details.
+See [Azure Local - Host Network Requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements#driver-requirements) for more details.
## Troubleshooting Steps
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NetworkIntentRequirement.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NetworkIntentRequirement.md
new file mode 100644
index 0000000..03058f2
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NetworkIntentRequirement.md
@@ -0,0 +1,248 @@
+# AzStackHci_Network_Test_NetworkIntentRequirement
+
+| Name | AzStackHci_Network_Test_NetworkIntentRequirement |
+|------|------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Deployment |
+
+## Overview
+
+This validator checks that Rack Aware clusters have exactly one storage-only intent defined. Rack Aware deployments require a dedicated storage intent to ensure proper storage network configuration across racks.
+
+## Requirements
+
+For Rack Aware clusters:
+1. Exactly one storage-only intent must be defined (TrafficType contains "Storage" only, without "Management" or "Compute")
+
+For Standard or Stretch clusters:
+- This validation is skipped (not applicable)
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about storage intent configuration.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_NetworkIntentRequirement",
+ "DisplayName": "Test host network intent requirements for Rack Aware cluster",
+ "Title": "Test host network intent requirements for Rack Aware cluster",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Test that only one storage-only intent is present for Rack Aware cluster",
+ "Remediation": "",
+ "TargetResourceID": "NetworkIntentRequirement",
+ "TargetResourceName": "NetworkIntentRequirement",
+ "TargetResourceType": "NetworkIntentRequirement",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "",
+ "Resource": "NetworkIntentRequirement",
+ "Detail": "No storage-only intent is present for Rack Aware cluster.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: No Storage-Only Intent for Rack Aware Cluster
+
+**Error Message:**
+```text
+No storage-only intent is present for Rack Aware cluster.
+```
+
+**Root Cause:** The Rack Aware cluster deployment is missing a dedicated storage-only intent. Rack Aware clusters require a storage-only intent to ensure proper storage traffic isolation and routing across racks.
+
+#### Remediation Steps
+
+##### Verify Current Intent Configuration
+
+1. Check existing intents in the deployment configuration:
+
+ ```powershell
+ # If intents are already created, check them
+ Get-NetIntent | Select-Object IntentName, TrafficType, NetAdapterNamesAsList | Format-Table -AutoSize
+ ```
+
+2. Identify if any intent is storage-only:
+
+ ```powershell
+ # Check for storage-only intents
+ Get-NetIntent | Where-Object {
+ $_.TrafficType -contains "Storage" -and
+ $_.TrafficType -notcontains "Management" -and
+ $_.TrafficType -notcontains "Compute"
+ }
+ ```
+
+##### Add Storage-Only Intent
+
+For Rack Aware deployments, create a dedicated storage-only intent:
+
+1. Identify adapters to use for storage traffic:
+ - Use high-speed adapters (10Gbps+, preferably 25Gbps or higher)
+ - Use RDMA-capable adapters if possible
+ - Ensure adapters are available on all nodes
+
+2. Add the storage-only intent to your deployment configuration:
+
+ **During deployment (before cluster creation):**
+ - Update your deployment configuration file or parameters
+ - Add a storage-only intent definition
+
+ **Example intent configuration:**
+ ```powershell
+ # Example: Storage-only intent for Rack Aware cluster
+ $storageIntent = @{
+ Name = "StorageIntent"
+ Adapter = @("Ethernet 2", "Ethernet 3") # Replace with your storage adapter names
+ TrafficType = @("Storage")
+ }
+ ```
+
+3. If using deployment configuration files (JSON/YAML), add the storage intent:
+
+ **Example JSON configuration:**
+ ```json
+ {
+ "intents": [
+ {
+ "name": "ManagementComputeIntent",
+ "trafficType": ["Management", "Compute"],
+ "adapter": ["Ethernet", "Ethernet 2"]
+ },
+ {
+ "name": "StorageIntent",
+ "trafficType": ["Storage"],
+ "adapter": ["Ethernet 3", "Ethernet 4"]
+ }
+ ]
+ }
+ ```
+
+4. If using PowerShell deployment, add the intent during cluster creation:
+
+ ```powershell
+ # During cluster deployment, add storage-only intent
+ Add-NetIntent -ClusterName "MyCluster" `
+ -Name "StorageIntent" `
+ -AdapterName @("Ethernet 3", "Ethernet 4") `
+ -Storage
+ ```
+
+##### Retry Deployment
+
+After adding the storage-only intent to your deployment configuration, retry the deployment operation.
+
+---
+
+### Failure: Multiple Storage-Only Intents for Rack Aware Cluster
+
+**Error Message:**
+```text
+More than 1 storage-only intents are present for Rack Aware cluster.
+```
+
+**Root Cause:** The deployment configuration includes more than one storage-only intent. Rack Aware clusters should have exactly one storage-only intent.
+
+#### Remediation Steps
+
+1. Review your deployment configuration to identify all storage-only intents.
+
+2. Determine which storage intent should be used and remove the others from your deployment configuration.
+
+3. Update the deployment configuration to include only one storage-only intent.
+
+4. Retry the deployment operation.
+
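+If the intents have already been created on a node, the sketch below enumerates the storage-only candidates (reusing the filter from the previous failure section) so you can decide which one to keep:
+
+```powershell
+# Sketch: list every storage-only intent and its adapters
+Get-NetIntent | Where-Object {
+    $_.TrafficType -contains "Storage" -and
+    $_.TrafficType -notcontains "Management" -and
+    $_.TrafficType -notcontains "Compute"
+} | Select-Object IntentName, @{ Name = "Adapters"; Expression = { $_.NetAdapterNamesAsList -join ", " } }
+```
+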
+---
+
+## Additional Information
+
+### Understanding Rack Aware Clusters
+
+Rack Aware clusters are designed for:
+- **Multi-rack deployments** where cluster nodes span multiple physical racks
+- **Fault domain awareness** based on rack placement
+- **Improved resilience** by distributing resources across racks
+
+### Why Storage-Only Intent is Required for Rack Aware
+
+Rack Aware clusters require a dedicated storage-only intent because:
+1. **Cross-rack storage traffic** must be properly routed
+2. **Storage network isolation** prevents interference from other traffic
+3. **Performance optimization** for storage across rack boundaries
+4. **Fault tolerance** ensures storage connectivity even if a rack has issues
+
+### Network Intent Patterns for Rack Aware
+
+**Recommended pattern for Rack Aware:**
+
+| Intent Name | Traffic Types | Adapters | Purpose |
+|------------|---------------|----------|---------|
+| ManagementComputeIntent | Management, Compute | eth0, eth1 | Management and VM traffic |
+| StorageIntent | Storage | eth2, eth3 | Dedicated storage traffic |
+
+**Not recommended (converged):**
+
+| Intent Name | Traffic Types | Adapters | Issue |
+|------------|---------------|----------|-------|
+| ConvergedIntent | Management, Compute, Storage | eth0, eth1 | Not allowed for Rack Aware |
+
+### Comparing Cluster Patterns
+
+| Cluster Pattern | Storage Intent Requirement |
+|----------------|---------------------------|
+| **Standard** | Storage can be converged or dedicated (no requirement) |
+| **Stretch** | Storage can be converged or dedicated (no requirement) |
+| **Rack Aware** | Must have exactly one storage-only intent (not converged) |
+
+### Checking Cluster Pattern
+
+To verify your cluster pattern:
+
+```powershell
+# Check cluster configuration
+# The cluster pattern is typically defined during deployment configuration
+# Check your deployment parameters or configuration file
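+
+# One hint (assumption, not an official check): a deployed rack-aware cluster
+# defines failover-clustering fault domains of type Rack, which you can list with:
+Get-ClusterFaultDomain -ErrorAction SilentlyContinue | Where-Object { $_.Type -eq "Rack" }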
+```
+
+### Storage-Only Intent Validation
+
+A storage-only intent must:
+- **Include** "Storage" in TrafficType
+- **Exclude** "Management" from TrafficType
+- **Exclude** "Compute" from TrafficType
+
+```powershell
+# Validate an intent is storage-only
+$intent = Get-NetIntent -Name "StorageIntent"
+$isStorageOnly = ($intent.TrafficType -contains "Storage") -and
+ ($intent.TrafficType -notcontains "Management") -and
+ ($intent.TrafficType -notcontains "Compute")
+
+if ($isStorageOnly) {
+ Write-Host "✓ Intent is storage-only" -ForegroundColor Green
+} else {
+ Write-Host "✗ Intent is NOT storage-only" -ForegroundColor Red
+ Write-Host "Traffic Types: $($intent.TrafficType -join ', ')"
+}
+```
+
+### Related Documentation
+
+- [Manage Network ATC](https://learn.microsoft.com/azure-stack/hci/manage/manage-network-atc)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-Duplicate-IP.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-Duplicate-IP.md
new file mode 100644
index 0000000..9f87ea5
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-Duplicate-IP.md
@@ -0,0 +1,186 @@
+# AzStackHci_Network_Test_New_Node_Validity_Duplicate_IP
+
+| Name | AzStackHci_Network_Test_New_Node_Validity_Duplicate_IP |
+|------|------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Add-Server |
+
+## Overview
+
+This validator checks that the management IP address of each new node being added to the cluster is not already defined in the ECE configuration and is not duplicated among the new nodes themselves (when multiple nodes are added at the same time).
+Each node in the cluster must have a unique management IP address.
+
+## Requirements
+
+The new node must meet the following requirement:
+1. The node's management IP address must be unique and not used by any other node in the cluster
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about whether duplicate IPs were found. The `TargetResourceID` shows the new node's management IP address.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_New_Node_Validity_Duplicate_IP",
+ "DisplayName": "Test New Node Configuration Duplicate IP",
+ "Title": "Test New Node Configuration Duplicate IP",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Checking New Node IP is not a duplicate",
+ "Remediation": "https://aka.ms/hci-envch",
+ "TargetResourceID": "10.0.1.50",
+ "TargetResourceName": "IPAddress",
+ "TargetResourceType": "IPAddress",
+ "Timestamp": "\\/Date(timestamp)\\/",
+ "AdditionalData": {
+ "Source": "NodeAndManagementIPMapping",
+ "Resource": "NodeManagementIPs",
+ "Detail": "Duplicate IPs found for Node Management IPs",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Duplicate IPs Found for Node Management IPs
+
+**Root Cause:** Multiple nodes in the cluster (including the new node being added) are configured with the same management IP address. This creates network conflicts and prevents proper cluster communication.
+
+#### Remediation Steps
+
+##### Confirm the Duplicate IPs
+
+1. Use the command below to confirm the management IP addresses of all existing cluster nodes and of the new node being added:
+
+ ```powershell
+ # Run on each node
+ $computerName = $env:COMPUTERNAME
+ $mgmtIP = (Get-NetIPConfiguration | Where-Object { $null -ne $_.IPv4DefaultGateway -and $_.NetAdapter.Status -eq "Up" }).IPv4Address.IPAddress
+ Write-Host "Node: $computerName, Management IP: $mgmtIP"
+ ```
+
+##### Change the New Node's Management IP Address
+
+Once you've confirmed the duplicate, change the new node's management IP to a unique address.
+
+1. Choose a new IP address that is:
+ - Unique (not used by any existing cluster node)
+ - Outside the infrastructure IP pool range
+ - In the same subnet as existing cluster nodes
+ - Not in use by any other device on the network
+
+2. On the new node, change the management IP address:
+
+ ```powershell
+ # Get the management adapter name
+ $adapterName = "myAdapterName" # Replace with the actual adapter name in your system
+
+ # Remove the duplicate IP address
+ $duplicateIP = "10.0.1.50" # Replace with the duplicate IP
+ Remove-NetIPAddress -InterfaceAlias $adapterName -IPAddress $duplicateIP -Confirm:$false
+
+ # Add a unique IP address
+ $newUniqueIP = "10.0.1.55" # Replace with a unique IP
+ $prefixLength = 24 # Replace with your subnet prefix length
+ $defaultGateway = "10.0.1.1" # Replace with your gateway
+
+ New-NetIPAddress -InterfaceAlias $adapterName -IPAddress $newUniqueIP -PrefixLength $prefixLength -DefaultGateway $defaultGateway
+ ```
+
+3. Verify the new IP address is configured correctly:
+
+ ```powershell
+ Get-NetIPAddress -InterfaceAlias $adapterName -AddressFamily IPv4 | Where-Object { $_.PrefixOrigin -eq "Manual" }
+ Get-NetIPConfiguration -InterfaceAlias $adapterName
+ ```
+
+4. Verify network connectivity and uniqueness:
+
+ ```powershell
+ # Test connectivity to default gateway
+ Test-NetConnection -ComputerName "10.0.1.1" # Replace with your gateway
+
+ # Test connectivity to an existing cluster node
+ Test-NetConnection -ComputerName ""
+
+ # Verify the IP is not in use by any other device
+ # Run this from an existing cluster node:
+ Test-NetConnection -ComputerName "10.0.1.55" -InformationLevel Quiet
+ # Should return True only when pinging the new node
+ ```
+
+5. Retry the Add-Server operation.
+
+> **Important**: Changing the management IP address will temporarily disconnect the node from the network. Ensure you have console access or remote management (e.g., iLO, iDRAC) before making this change.
+
+---
+
+## Additional Information
+
+### Why Duplicate IPs Cause Problems
+
+Duplicate IP addresses cause several critical issues:
+1. **Network conflicts**: Both nodes try to respond to the same IP
+2. **Unreliable communication**: Network traffic may be routed to the wrong node
+3. **Cluster instability**: The cluster cannot reliably communicate with nodes
+4. **Add-Server failure**: The operation will fail to complete
+
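+Windows' duplicate address detection (DAD) also surfaces conflicts directly. A quick check you can run on any node (a sketch):
+
+```powershell
+# Sketch: an IPv4 address that lost duplicate-address detection shows up
+# with AddressState 'Duplicate'
+Get-NetIPAddress -AddressFamily IPv4 |
+    Where-Object { $_.AddressState -eq "Duplicate" } |
+    Select-Object InterfaceAlias, IPAddress, AddressState
+```
+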
+### Common Causes of Duplicate IPs
+| Cause | Description | Prevention |
+|-------|-------------|-----------|
+| Copy-paste error | Copied IP from existing node documentation | Double-check IP before configuration |
+| Outdated documentation | Using old IP assignments | Keep IP inventory up to date |
+| Manual typo | Incorrect IP entered during configuration | Verify IP address before applying |
+
+### Best Practices for IP Address Management
+
+1. **Maintain an IP address inventory**:
+ - Create a spreadsheet tracking all cluster node IPs
+ - Document which IPs are reserved for infrastructure pool
+ - Update the inventory when adding or removing nodes
+
+2. **Use consistent IP addressing schemes**:
+ - Example: First node = .10, second node = .11, etc.
+ - Reserve .100-.200 for infrastructure pool
+ - Keep management IPs in a contiguous range
+
+3. **Verify before deployment**:
+ ```powershell
+ # Before configuring a new node, verify IP is available
+ Test-NetConnection -ComputerName "10.0.1.55" -InformationLevel Quiet
+ # Should return False if IP is available
+ ```
+
+4. **Use DHCP reservations or static IPs consistently**:
+ - For static IP deployments, configure IPs manually before Add-Server
+ - For DHCP deployments, create DHCP reservations for each node
+
+### Example IP Addressing Scheme
+
+For a 4-node cluster with infrastructure pool:
+
+| Node | Management IP | Infrastructure Pool | Gateway |
+|------|--------------|-------------------|---------|
+| NODE1 | 10.0.1.11 | 10.0.1.100-10.0.1.200 | 10.0.1.1 |
+| NODE2 | 10.0.1.12 | 10.0.1.100-10.0.1.200 | 10.0.1.1 |
+| NODE3 | 10.0.1.13 | 10.0.1.100-10.0.1.200 | 10.0.1.1 |
+| NODE4 | 10.0.1.14 | 10.0.1.100-10.0.1.200 | 10.0.1.1 |
+
+### Related Documentation
+
+- [Azure Local Network Requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements)
+- [Add servers to an Azure Local cluster](https://learn.microsoft.com/azure-stack/hci/manage/add-server)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-First-Adapter.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-First-Adapter.md
new file mode 100644
index 0000000..268e244
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-First-Adapter.md
@@ -0,0 +1,244 @@
+# AzStackHci_Network_Test_New_Node_First_Adapter_Validity
+
+| Name | AzStackHci_Network_Test_New_Node_First_Adapter_Validity |
+|------|------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Add-Server |
+
+## Overview
+
+This validator checks that the first network adapter specified in the management intent on the new node has the management IP address configured. The validator checks both the physical adapter and the virtual management adapter (if using a VMSwitch).
+
+## Requirements
+
+The new node must meet the following requirements:
+1. The first physical adapter defined in the management intent must exist on the system, OR
+2. If using a VMSwitch, the virtual management adapter `vManagement(IntentName)` must exist
+3. The adapter (physical or virtual) must have the management IP address configured
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about the adapter and IP configuration. The `TargetResourceName` shows the adapter name being checked.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_New_Node_First_Adapter_Validity",
+ "DisplayName": "Test New Node Configuration First Network Adapter has Management IP",
+ "Title": "Test New Node Configuration First Network Adapter has Management IP",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Checking New Node first adapter has management IP",
+ "Remediation": "https://learn.microsoft.com/en-us/azure-stack/hci/deploy/deployment-tool-checklist",
+ "TargetResourceID": "10.0.1.50",
+ "TargetResourceName": "Ethernet",
+ "TargetResourceType": "Network Adapter",
+ "Timestamp": "\\/Date(timestamp)\\/",
+ "AdditionalData": {
+ "Source": "NewNodeAdapter",
+ "Resource": "NewNodeAdapterIP",
+ "Detail": "Either the adapter (physical or virtual) ('Ethernet' or 'vManagement(ManagementIntent)') was not found or the mgmt IP ('10.0.1.50') on the adapter was wrong",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Adapter Not Found or Management IP Not on Adapter
+
+**Root Cause:** The validator could not find the expected network adapter with the management IP address. This can occur if:
+- The physical adapter name doesn't match the management intent configuration
+- The virtual management adapter doesn't exist (for VMSwitch configurations)
+- The management IP is configured on a different adapter
+- The adapter exists but doesn't have any IP address configured
+
+#### Remediation Steps
+
+##### Step 1: Verify Adapter Configuration
+
+First, identify which adapters exist on the node and their IP configurations:
+
+```powershell
+# Run on the new node being added
+# List all network adapters
+Get-NetAdapter | Select-Object Name, Status, InterfaceDescription
+
+# List all IP addresses
+Get-NetIPAddress -AddressFamily IPv4 | Select-Object InterfaceAlias, IPAddress, PrefixOrigin
+
+# Get IP configuration with gateway
+Get-NetIPConfiguration | Where-Object {
+ $null -ne $_.IPv4DefaultGateway -and
+ $_.NetAdapter.Status -eq "Up"
+} | Select-Object InterfaceAlias, IPv4Address, IPv4DefaultGateway
+```
+
+##### Step 2: Check Management Intent Configuration
+
+Verify which adapter is defined in the management intent:
+
+```powershell
+# Run on the existing node in the cluster
+$mgmtIntent = Get-NetIntent | Where-Object { $_.IsManagementIntentSet -eq $true }
+if ($mgmtIntent) {
+ Write-Host "Management Intent Name: $($mgmtIntent.IntentName)"
+ Write-Host "Management Intent Adapters: $($mgmtIntent.NetAdapterNamesAsList -join ', ')"
+} else {
+ Write-Host "No management intent found!"
+}
+```
+
+##### Step 3: Remediate Based on Configuration Type
+
+Choose the appropriate remediation based on whether you're using a VMSwitch or physical adapter configuration.
+
+###### Scenario A: Using Physical Adapter (No VMSwitch)
+
+If you're using a physical adapter for management:
+
+1. Verify the first adapter in the management intent exists in the new node:
+
+ ```powershell
+ $firstAdapterName = $mgmtIntent.NetAdapterNamesAsList[0]
+ Get-NetAdapter -Name $firstAdapterName -ErrorAction SilentlyContinue
+ ```
+
+2. Configure the management IP on the correct adapter:
+
+ ```powershell
+ $adapterName = $firstAdapterName # From above
+ $mgmtIP = "10.0.1.50" # Replace with your management IP
+ $prefixLength = 24 # Replace with your subnet prefix length
+ $defaultGateway = "10.0.1.1" # Replace with your gateway
+
+ # Remove any existing IPs (if needed)
+ Get-NetIPAddress -InterfaceAlias $adapterName -AddressFamily IPv4 -ErrorAction SilentlyContinue |
+ Remove-NetIPAddress -Confirm:$false
+
+   # Remove any stale default route so the gateway can be set cleanly (no-op if none exists)
+   Remove-NetRoute -InterfaceAlias $adapterName -DestinationPrefix "0.0.0.0/0" -Confirm:$false -ErrorAction SilentlyContinue
+
+   # Configure the management IP
+ New-NetIPAddress -InterfaceAlias $adapterName -IPAddress $mgmtIP -PrefixLength $prefixLength -DefaultGateway $defaultGateway
+
+ # Configure DNS
+ Set-DnsClientServerAddress -InterfaceAlias $adapterName -ServerAddresses "10.0.1.1" # Replace with your DNS
+ ```
+
+3. Verify the configuration:
+
+ ```powershell
+ Get-NetIPAddress -InterfaceAlias $adapterName -AddressFamily IPv4
+ Get-NetIPConfiguration -InterfaceAlias $adapterName
+ Test-NetConnection -ComputerName "10.0.1.1" # Test gateway
+ ```
+
+###### Scenario B: Using VMSwitch with Virtual Management Adapter
+
+If you're using a VMSwitch:
+
+1. Check if the virtual management adapter exists on the new node:
+
+ ```powershell
+ $intentName = $mgmtIntent.IntentName
+ $vNicName = "vManagement($intentName)"
+
+ $vNic = Get-VMNetworkAdapter -ManagementOS -Name $vNicName -ErrorAction SilentlyContinue
+ if ($vNic) {
+ Write-Host "Virtual adapter '$vNicName' exists"
+ } else {
+ Write-Host "ERROR: Virtual adapter '$vNicName' NOT found!"
+ }
+ ```
+
+2. If the virtual adapter doesn't exist, you may need to recreate the VMSwitch or add the virtual adapter. A minimal sketch, assuming the VMSwitch itself exists and only the management vNIC is missing:
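+
+   ```powershell
+   # Assumes the VMSwitch already exists; adjust the selection if multiple switches are present
+   $switchName = (Get-VMSwitch | Select-Object -First 1).Name
+
+   # Create the management-OS vNIC with the expected name on that switch
+   Add-VMNetworkAdapter -ManagementOS -SwitchName $switchName -Name $vNicName
+   ```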
+
+3. Configure the management IP on the virtual adapter:
+
+ ```powershell
+ $adapterName = $vNicName
+ $mgmtIP = "10.0.1.50" # Replace with your management IP
+ $prefixLength = 24 # Replace with your subnet prefix length
+ $defaultGateway = "10.0.1.1" # Replace with your gateway
+
+ # Remove any existing IPs (if needed)
+ Get-NetIPAddress -InterfaceAlias $adapterName -AddressFamily IPv4 -ErrorAction SilentlyContinue |
+ Remove-NetIPAddress -Confirm:$false
+
+   # Remove any stale default route so the gateway can be set cleanly (no-op if none exists)
+   Remove-NetRoute -InterfaceAlias $adapterName -DestinationPrefix "0.0.0.0/0" -Confirm:$false -ErrorAction SilentlyContinue
+
+   # Configure the management IP
+ New-NetIPAddress -InterfaceAlias $adapterName -IPAddress $mgmtIP -PrefixLength $prefixLength -DefaultGateway $defaultGateway
+
+ # Configure DNS
+ Set-DnsClientServerAddress -InterfaceAlias $adapterName -ServerAddresses "10.0.1.1" # Replace with your DNS
+ ```
+
+4. Verify the configuration:
+
+ ```powershell
+ Get-NetIPAddress -InterfaceAlias $adapterName -AddressFamily IPv4
+ Get-NetIPConfiguration -InterfaceAlias $adapterName
+ Test-NetConnection -ComputerName "10.0.1.1" # Test gateway
+ ```
+
+##### Step 4: Retry Add-Server
+
+After configuring the correct adapter with the management IP, retry the Add-Server operation.
+
+---
+
+## Additional Information
+
+### Understanding Management Adapter Configuration
+
+Azure Local supports two configuration models:
+
+1. **Physical Adapter**: Management IP is directly on the physical network adapter
+2. **VMSwitch with Virtual Adapter**: Management IP is on a virtual adapter attached to a VMSwitch
+
+The validator checks both possibilities:
+- First, it checks the physical adapter specified in the management intent
+- If not found or the IP doesn't match, it checks the `vManagement(IntentName)` virtual adapter, as sketched below
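+
+A rough sketch of that two-step check, assuming `$mgmtIntent` and `$mgmtIP` hold the management intent and the expected management IP (assumed behavior, not the validator's actual code):
+
+```powershell
+$firstAdapter = $mgmtIntent.NetAdapterNamesAsList[0]
+$vNicName = "vManagement($($mgmtIntent.IntentName))"
+
+# Pass if the expected management IP sits on either the first physical
+# adapter from the intent or the virtual management adapter
+$match = Get-NetIPAddress -AddressFamily IPv4 -ErrorAction SilentlyContinue |
+    Where-Object { $_.IPAddress -eq $mgmtIP -and $_.InterfaceAlias -in @($firstAdapter, $vNicName) }
+
+if (-not $match) {
+    Write-Host "FAILURE: '$mgmtIP' not found on '$firstAdapter' or '$vNicName'"
+}
+```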
+
+### Common Causes of Failure
+
+| Cause | Description | Resolution |
+|-------|-------------|-----------|
+| Adapter name mismatch | Physical adapter has different name than intent | Rename adapter or update intent |
+| VMSwitch missing | The VMSwitch does not exist on the new node, or was created with a different adapter set | Recreate the VMSwitch with the correct adapter list |
+| IP on wrong adapter | Management IP on different adapter | Move IP to correct adapter |
+| No IP configured | Adapter exists but has no IP | Configure management IP |
+| Adapter disabled | Adapter exists but is disabled | Enable the adapter |
+
+### Checking Adapter Status
+
+Use these commands to troubleshoot:
+
+```powershell
+# Get all adapters and their status
+Get-NetAdapter | Format-Table Name, Status, LinkSpeed, InterfaceDescription -AutoSize
+
+# Get all IP configurations
+Get-NetIPConfiguration | Format-Table InterfaceAlias, IPv4Address, IPv4DefaultGateway -AutoSize
+
+# Check for VMNetworkAdapters (Management OS)
+Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName, IPAddresses -AutoSize
+
+# Check VMSwitch info
+Get-VMSwitch
+```
+
+### Related Documentation
+
+- [Azure Local Network Requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements)
+- [Add servers to an Azure Local cluster](https://learn.microsoft.com/azure-stack/hci/manage/add-server)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-IP-Conflict.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-IP-Conflict.md
new file mode 100644
index 0000000..6415891
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-IP-Conflict.md
@@ -0,0 +1,201 @@
+# AzStackHci_Network_Test_New_Node_Validity_IP_Conflict
+
+| Name | AzStackHci_Network_Test_New_Node_Validity_IP_Conflict |
+|------|--------------------------------------------------------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Add-Server |
+
+## Overview
+
+This validator checks that the management IP address of the new node being added to the cluster is not already in use by another node. It differs from the duplicate IP check in that it specifically verifies the IP addresses saved in the ECE configuration store and those passed as parameters to the Add-Server cmdlet call.
+
+## Requirements
+
+The new node must meet the following requirement:
+1. The node's management IP address must not be in use by any other node in the cluster
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about which node is using the conflicting IP. The `TargetResourceID` shows the new node's management IP address.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_New_Node_Validity_IP_Conflict",
+ "DisplayName": "Test New Node Configuration Conflicting IP",
+ "Title": "Test New Node Configuration Conflicting IP",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Checking New Node IP is not on another node",
+ "Remediation": "https://aka.ms/hci-envch",
+ "TargetResourceID": "10.0.1.50",
+ "TargetResourceName": "IPAddress",
+ "TargetResourceType": "IPAddress",
+ "Timestamp": "\\/Date(timestamp)\\/",
+ "AdditionalData": {
+ "Source": "NodeAndManagementIPMapping",
+ "Resource": "NodeNameAndManagementIP",
+ "Detail": "Mgmt IP '10.0.1.50' was found on another Node. Found on 'NODE1'",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Management IP Found on Another Node
+
+**Error Message:**
+```text
+Mgmt IP '10.0.1.50' was found on another Node. Found on 'NODE1'
+```
+
+**Root Cause:** The management IP address configured on the new node is already in use by another existing node in the cluster (in this example, NODE1). This creates an IP conflict that prevents the new node from being added.
+
+#### Remediation Steps
+
+##### Option 1: Change the New Node's Management IP (Recommended)
+
+This is the recommended approach as it doesn't impact existing cluster nodes.
+
+1. Identify the conflicting node from the error message (e.g., `NODE1`).
+
+2. Verify the conflict by checking the existing node's IP:
+
+ ```powershell
+ # Run on the existing node mentioned in the error (e.g., NODE1)
+ $computerName = $env:COMPUTERNAME
+ $mgmtIP = (Get-NetIPConfiguration | Where-Object { $null -ne $_.IPv4DefaultGateway -and $_.NetAdapter.Status -eq "Up" }).IPv4Address.IPAddress
+ Write-Host "Node: $computerName, Management IP: $mgmtIP"
+ ```
+
+3. Choose a new unique IP address for the new node that is:
+ - Not used by any existing cluster node
+ - Outside the infrastructure IP pool range
+ - In the same subnet as existing cluster nodes
+ - Not in use by any other device on the network
+
+4. On the new node being added, change the management IP address. A minimal sketch, assuming "Ethernet" is the management adapter; substitute your own adapter name and IP values:
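+
+   ```powershell
+   $adapterName = "Ethernet"    # Replace with your management adapter name
+   $oldIP = "10.0.1.50"         # The conflicting IP from the error message
+   $newIP = "10.0.1.55"         # The new unique IP chosen in step 3
+   $prefixLength = 24           # Replace with your subnet prefix length
+   $defaultGateway = "10.0.1.1" # Replace with your gateway
+
+   # Remove the conflicting IP and any stale default route
+   Remove-NetIPAddress -InterfaceAlias $adapterName -IPAddress $oldIP -Confirm:$false
+   Remove-NetRoute -InterfaceAlias $adapterName -DestinationPrefix "0.0.0.0/0" -Confirm:$false -ErrorAction SilentlyContinue
+
+   # Configure the new management IP
+   New-NetIPAddress -InterfaceAlias $adapterName -IPAddress $newIP -PrefixLength $prefixLength -DefaultGateway $defaultGateway
+   ```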
+
+5. Verify the new IP address is configured correctly:
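+
+   ```powershell
+   # $adapterName is carried over from the previous step
+   Get-NetIPAddress -InterfaceAlias $adapterName -AddressFamily IPv4
+   Get-NetIPConfiguration -InterfaceAlias $adapterName
+   ```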
+
+6. Verify network connectivity:
+
+ ```powershell
+ # Test connectivity to default gateway
+ Test-NetConnection -ComputerName "10.0.1.1"
+
+ # Test connectivity to the existing node
+ Test-NetConnection -ComputerName "10.0.1.50" # Should reach NODE1
+
+ # Test from an existing node to the new node
+ Test-NetConnection -ComputerName "10.0.1.55" # Should reach new node
+ ```
+
+7. Retry the Add-Server operation.
+
+> **Important**: Changing the management IP address will temporarily disconnect the node from the network. Ensure you have console access or remote management (e.g., iLO, iDRAC) before making this change.
+
+##### Option 2: Correct the Configuration (If Misconfigured)
+
+If the new node was incorrectly configured with an existing node's IP:
+
+1. Review the deployment configuration or answer file to ensure the correct IP address is specified for the new node.
+
+2. Correct any configuration errors in the deployment parameters.
+
+3. Retry the Add-Server operation with the corrected configuration.
+
+---
+
+## Additional Information
+
+### Understanding IP Conflicts
+
+An IP conflict occurs when two devices on the same network are configured with the same IP address. This can cause:
+- **Network instability**: Packets may be delivered to the wrong device
+- **Connection failures**: Services may become unreachable
+- **Cluster communication issues**: Nodes cannot reliably communicate
+
+### Difference from Duplicate IP Check
+
+While similar, this validator differs from `AzStackHci_Network_Test_New_Node_Validity_Duplicate_IP`:
+- **Duplicate IP check**: Verifies no two nodes have the same IP (general duplicate detection)
+- **IP Conflict check**: Specifically verifies the new node's IP is not on a different (existing) node
+
+Both checks ensure IP uniqueness but from slightly different perspectives.
+
+### Common Causes of IP Conflicts
+| Cause | Description | Prevention |
+|-------|-------------|-----------|
+| Manual configuration error | IP address was manually configured incorrectly | Use IP address management (IPAM) tools |
+| Cloned node | New node was cloned from an existing node | Always reconfigure IP after cloning |
+| DHCP issue | DHCP server assigned duplicate address | Use DHCP reservations for cluster nodes |
+
+### Troubleshooting Tips
+
+1. **List all node IPs**:
+ ```powershell
+ # Run on each existing cluster node to create an inventory
+ Get-ClusterNode | ForEach-Object {
+ $nodeName = $_.Name
+ $session = New-PSSession -ComputerName $nodeName
+ $ip = Invoke-Command -Session $session -ScriptBlock {
+ (Get-NetIPConfiguration | Where-Object {
+ $null -ne $_.IPv4DefaultGateway -and
+ $_.NetAdapter.Status -eq "Up"
+ }).IPv4Address.IPAddress
+ }
+ Remove-PSSession $session
+ [PSCustomObject]@{
+ NodeName = $nodeName
+ ManagementIP = $ip
+ }
+ }
+ ```
+
+2. **Verify IP availability before configuration**:
+ ```powershell
+ # Test if IP is in use (from any existing cluster node)
+ $testIP = "10.0.1.55"
+ $pingResult = Test-NetConnection -ComputerName $testIP -InformationLevel Quiet
+ if ($pingResult) {
+ Write-Host "WARNING: IP $testIP is already in use!"
+ } else {
+ Write-Host "IP $testIP is available"
+ }
+ ```
+
+3. **Check for ARP cache conflicts**:
+ ```powershell
+ # View ARP cache to see which MAC address is associated with the IP
+ Get-NetNeighbor -IPAddress "10.0.1.50" -ErrorAction SilentlyContinue
+ ```
+
+### Example Scenario
+
+**Existing cluster nodes:**
+- NODE1: 10.0.1.10
+- NODE2: 10.0.1.11
+- NODE3: 10.0.1.12
+
+**Adding new node (NODE4):**
+- **Incorrect**: Configured as 10.0.1.10 (conflicts with NODE1) ❌
+- **Correct**: Configured as 10.0.1.13 (unique IP) ✅
+
+### Related Documentation
+
+- [Azure Local Network Requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements)
+- [Add servers to an Azure Local cluster](https://learn.microsoft.com/azure-stack/hci/manage/add-server)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-Name-IP-Match.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-Name-IP-Match.md
new file mode 100644
index 0000000..ed70ea3
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-Name-IP-Match.md
@@ -0,0 +1,200 @@
+# AzStackHci_Network_Test_New_Node_And_IP_Match
+
+| Name | AzStackHci_Network_Test_New_Node_And_IP_Match |
+|------|------------------------------------------------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Add-Server |
+
+## Overview
+
+This validator checks that the new node's hostname and management IP address match the expected configuration provided in the Add-Server operation. The node name retrieved from the system must correspond to the management IP address assigned to that node.
+
+## Requirements
+
+The new node must meet the following requirement:
+1. The node's hostname must match the expected configuration for its management IP address
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about the node name and IP mismatch. The `TargetResourceID` shows the management IP address.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_New_Node_And_IP_Match",
+ "DisplayName": "Test New Node Configuration Name and IP Match",
+ "Title": "Test New Node Configuration Name and IP Match",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Checking New Node Name and IP match",
+ "Remediation": "https://aka.ms/hci-envch",
+ "TargetResourceID": "10.0.1.50",
+ "TargetResourceName": "IPAddress",
+ "TargetResourceType": "IPAddress",
+ "Timestamp": "\\/Date(timestamp)\\/",
+ "AdditionalData": {
+ "Source": "NodeAndManagementIPMapping",
+ "Resource": "NewNodeNameAndManagementIP",
+ "Detail": "New Node Mgmt IP '10.0.1.50' is not on the New Node. Found instead on 'NODE1'",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Node Name and IP Do Not Match Expected Configuration
+
+**Error Message:**
+```text
+New Node Mgmt IP '10.0.1.50' is not on the New Node. Found instead on 'NODE1'
+```
+
+**Root Cause:** The management IP address being used for the Add-Server operation is associated with a different node than expected. This typically indicates a configuration mismatch where:
+- The wrong IP address was specified in the Add-Server configuration
+- The node hostname doesn't match the expected mapping
+- The node was not properly configured before Add-Server
+
+#### Remediation Steps
+
+##### Verify Node Configuration
+
+1. Check the actual node name and IP configuration on the node being added:
+
+ ```powershell
+ # Run on the node being added
+ $computerName = $env:COMPUTERNAME
+ $mgmtIP = (Get-NetIPConfiguration | Where-Object {
+ $null -ne $_.IPv4DefaultGateway -and
+ $_.NetAdapter.Status -eq "Up"
+ }).IPv4Address.IPAddress
+
+ Write-Host "Actual Node Name: $computerName"
+ Write-Host "Actual Management IP: $mgmtIP"
+ ```
+
+2. Compare with the expected configuration:
+ - Expected Node Name: (from Add-Server configuration)
+ - Expected Management IP: (from error message, e.g., `10.0.1.50`)
+
+##### Option 1: Correct the Add-Server Configuration (If Wrong IP Specified)
+
+If the Add-Server configuration specifies the wrong IP address:
+
+1. Review the Add-Server parameters and retry with the corrected configuration.
+
+##### Option 2: Rename the Node (If Wrong Hostname)
+
+If the node has the wrong hostname:
+
+1. Confirm the current hostname does not match the expected name (see the commands under "Verify Node Configuration" above).
+
+2. Rename the computer (requires restart). A minimal sketch, assuming `NODE4` is the expected name from your deployment plan:
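+
+   ```powershell
+   # Requires a restart for the new name to take effect
+   Rename-Computer -NewName "NODE4" -Restart
+   ```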
+
+3. After the node restarts, verify the name change:
+
+ ```powershell
+ $env:COMPUTERNAME
+ ```
+
+4. Retry the Add-Server operation.
+
+> **Warning**: Renaming a computer requires a restart and will temporarily disconnect the node.
+
+##### Option 3: Reconfigure the Node's Management IP (If Wrong IP)
+
+If the node has the wrong management IP configured:
+
+1. Determine the correct IP address for this node from your deployment plan.
+
+2. On the node being added, change the management IP. A minimal sketch, assuming "Ethernet" is the management adapter; substitute your own adapter name and IPs from the deployment plan:
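+
+   ```powershell
+   $adapterName = "Ethernet"    # Replace with your management adapter name
+   $correctIP = "10.0.1.13"     # The IP expected for this node
+   $prefixLength = 24           # Replace with your subnet prefix length
+   $defaultGateway = "10.0.1.1" # Replace with your gateway
+
+   # Clear the wrong IP and any stale default route, then set the correct IP
+   Get-NetIPAddress -InterfaceAlias $adapterName -AddressFamily IPv4 -ErrorAction SilentlyContinue |
+       Remove-NetIPAddress -Confirm:$false
+   Remove-NetRoute -InterfaceAlias $adapterName -DestinationPrefix "0.0.0.0/0" -Confirm:$false -ErrorAction SilentlyContinue
+   New-NetIPAddress -InterfaceAlias $adapterName -IPAddress $correctIP -PrefixLength $prefixLength -DefaultGateway $defaultGateway
+   ```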
+
+3. Retry the Add-Server operation.
+
+---
+
+## Additional Information
+
+### Understanding the Node Name and IP Mapping
+
+During Add-Server operations, Azure Local expects:
+1. **A specific node name** to be added (e.g., NODE4)
+2. **A specific management IP** for that node (e.g., 10.0.1.13)
+3. **The hostname and IP must match** on the actual node
+
+This validator ensures the mapping is correct before proceeding with the add operation.
+
+### Best Practices for Add-Server Preparation
+
+1. **Pre-configure the node**:
+ - Set the correct hostname
+ - Configure the correct management IP
+ - Verify network connectivity
+ - Document the configuration
+
+2. **Create a deployment checklist**:
+ ```plaintext
+ Node Preparation Checklist:
+ ☐ Hostname set to: NODE4
+   ☐ Management IP: 10.0.1.13/24
+ ☐ Gateway: 10.0.1.1
+ ☐ DNS: 10.0.1.1
+ ☐ Network connectivity tested
+ ☐ Time synchronized with cluster
+ ```
+
+3. **Verify before Add-Server**:
+ ```powershell
+ # Run on the node before adding to cluster
+ $computerName = $env:COMPUTERNAME
+ $mgmtIP = (Get-NetIPConfiguration | Where-Object {
+ $null -ne $_.IPv4DefaultGateway -and
+ $_.NetAdapter.Status -eq "Up"
+ }).IPv4Address.IPAddress
+
+ Write-Host "Node Name: $computerName"
+ Write-Host "Management IP: $mgmtIP"
+ Write-Host ""
+ Write-Host "Expected Node Name: NODE4"
+ Write-Host "Expected Management IP: 10.0.1.13"
+ Write-Host ""
+ if ($computerName -eq "NODE4" -and $mgmtIP -eq "10.0.1.13") {
+ Write-Host "✓ Configuration matches expectations" -ForegroundColor Green
+ } else {
+ Write-Host "✗ Configuration does NOT match expectations" -ForegroundColor Red
+ }
+ ```
+
+### Example Deployment Plan
+
+For adding NODE4 to an existing 3-node cluster:
+
+| Node | Expected Hostname | Expected Management IP | Status |
+|------|------------------|----------------------|--------|
+| NODE1 | NODE1 | 10.0.1.10 | Existing |
+| NODE2 | NODE2 | 10.0.1.11 | Existing |
+| NODE3 | NODE3 | 10.0.1.12 | Existing |
+| NODE4 | NODE4 | 10.0.1.13 | Being Added |
+
+**Before running Add-Server:**
+1. Configure NODE4 with hostname "NODE4"
+2. Configure NODE4 with IP 10.0.1.13
+3. Verify configuration matches expectations
+4. Run Add-Server with NODE4/10.0.1.13 configuration
+
+### Related Documentation
+
+- [Azure Local Network Requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements)
+- [Add servers to an Azure Local cluster](https://learn.microsoft.com/azure-stack/hci/manage/add-server)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-Outside-MgmtRange.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-Outside-MgmtRange.md
new file mode 100644
index 0000000..3d7cf3f
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-NewNode-Outside-MgmtRange.md
@@ -0,0 +1,164 @@
+# AzStackHci_Network_Test_New_Node_Validity_Outside_Mgmt_Range
+
+| Name | AzStackHci_Network_Test_New_Node_Validity_Outside_Mgmt_Range |
+|------|----------------------------------------------------------------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Add-Server |
+
+## Overview
+
+This validator checks that the management IP address of the new node being added to the cluster does not fall within the infrastructure IP pool range. The infrastructure IP pool is reserved for dynamic IP allocation, and node management IPs must be outside this range to avoid conflicts.
+
+## Requirements
+
+The new node must meet the following requirement:
+1. The node's management IP address must be outside the infrastructure IP pool range (StartingAddress to EndingAddress)
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about the management IP and IP pool range. The `Source` field identifies the new node name, and the `TargetResourceID` shows the management IP address.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_New_Node_Validity_Outside_Mgmt_Range",
+ "DisplayName": "Test New Node Configuration Outside Management Range",
+ "Title": "Test New Node Configuration Outside Management Range",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Checking New Node IP",
+ "Remediation": "https://aka.ms/hci-envch",
+ "TargetResourceID": "10.0.1.150",
+ "TargetResourceName": "IPAddress",
+ "TargetResourceType": "IPAddress",
+ "Timestamp": "\\/Date(timestamp)\\/",
+ "AdditionalData": {
+ "Source": "NODE2",
+ "Resource": "NewNodeManagementIP",
+ "Detail": "New Node Mgmt IP '10.0.1.150' is inside the Start '10.0.1.100' and End '10.0.1.200' of the infra IP Pool! Make sure the IP is not in the range of the Infra IP Pool.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: New Node Management IP is Inside Infrastructure IP Pool Range
+
+**Error Message:**
+```text
+New Node Mgmt IP '10.0.1.150' is inside the Start '10.0.1.100' and End '10.0.1.200' of the infra IP Pool! Make sure the IP is not in the range of the Infra IP Pool.
+```
+
+**Root Cause:** The new node's management IP address falls within the infrastructure IP pool range. This creates a conflict because the IP address could be dynamically assigned to a cluster resource, resulting in IP address conflicts.
+
+#### Remediation Steps
+
+##### Change the New Node's Management IP Address (Recommended)
+
+Reconfigure the new node to use a management IP address outside the infrastructure IP pool range.
+
+1. Identify the infrastructure IP pool range from the error message (e.g., `10.0.1.100-10.0.1.200`).
+
+2. Choose a new IP address for the new node that is:
+ - Outside the infrastructure IP pool range
+ - In the same subnet as the infrastructure IP pool and existing nodes
+ - Not in use by any other device on the network
+ - Not used by any existing cluster nodes
+
+3. On the new node, change the management IP address:
+
+ ```powershell
+ # Use the management adapter name in your system
+ $adapterName = "myAdapterName"
+
+ # Remove the old IP address
+ $oldIP = "10.0.1.150" # Replace with the current management IP from error
+ Remove-NetIPAddress -InterfaceAlias $adapterName -IPAddress $oldIP -Confirm:$false
+
+ # Add the new IP address (outside the IP pool range)
+ $newIP = "10.0.1.50" # Replace with new IP outside the pool range
+ $prefixLength = 24 # Replace with your subnet prefix length
+ $defaultGateway = "10.0.1.1" # Replace with your gateway
+
+   # Remove any stale default route so the gateway can be set cleanly (no-op if none exists)
+   Remove-NetRoute -InterfaceAlias $adapterName -DestinationPrefix "0.0.0.0/0" -Confirm:$false -ErrorAction SilentlyContinue
+
+ New-NetIPAddress -InterfaceAlias $adapterName -IPAddress $newIP -PrefixLength $prefixLength -DefaultGateway $defaultGateway
+ ```
+
+4. Verify the new IP address is configured correctly:
+
+ ```powershell
+ Get-NetIPAddress -InterfaceAlias $adapterName -AddressFamily IPv4 | Where-Object { $_.PrefixOrigin -eq "Manual" }
+ Get-NetIPConfiguration -InterfaceAlias $adapterName
+ ```
+
+5. Verify network connectivity:
+
+ ```powershell
+ # Test connectivity to default gateway
+ Test-NetConnection -ComputerName "10.0.1.1"
+
+   # Test connectivity to an existing cluster node (substitute a real node name or IP)
+   Test-NetConnection -ComputerName "NODE1"
+
+ # Test DNS resolution
+ Resolve-DnsName "microsoft.com"
+ ```
+
+6. Retry the Add-Server operation.
+
+> **Important**: Changing the management IP address will temporarily disconnect the node from the network. Ensure you have console access or remote management (e.g., iLO, iDRAC) before making this change.
+
+---
+
+## Additional Information
+
+### Understanding IP Pool Ranges
+
+The infrastructure IP pool is a range of IP addresses reserved for:
+- Cluster IP address
+- Virtual IP addresses for cluster resources
+- Dynamic IP allocation for services
+
+Example IP pool: `10.0.1.100` to `10.0.1.200`
+- This reserves 101 IP addresses (100-200 inclusive)
+- Node management IPs must be outside this range
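+
+To check a candidate IP against the pool boundaries programmatically, a minimal sketch (the `Test-IpInRange` function is illustrative, not a built-in cmdlet):
+
+```powershell
+function Test-IpInRange {
+    param([string]$IP, [string]$Start, [string]$End)
+    # Convert dotted-quad addresses to unsigned integers for comparison
+    $toUInt32 = {
+        param($addr)
+        $bytes = ([System.Net.IPAddress]::Parse($addr)).GetAddressBytes()
+        [Array]::Reverse($bytes)
+        [BitConverter]::ToUInt32($bytes, 0)
+    }
+    $value = & $toUInt32 $IP
+    ($value -ge (& $toUInt32 $Start)) -and ($value -le (& $toUInt32 $End))
+}
+
+Test-IpInRange -IP "10.0.1.150" -Start "10.0.1.100" -End "10.0.1.200"  # True  -> inside pool, blocked
+Test-IpInRange -IP "10.0.1.50"  -Start "10.0.1.100" -End "10.0.1.200"  # False -> outside pool, OK
+```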
+
+### Choosing a New Management IP
+
+When selecting a new management IP for the node:
+
+1. **Check the IP pool boundaries**:
+ - If pool is `10.0.1.100-10.0.1.200`, use IPs below 100 or above 200
+ - Examples: `10.0.1.50`, `10.0.1.99`, `10.0.1.201`, `10.0.1.250`
+
+2. **Verify IP is not in use**:
+ ```powershell
+ # Test if IP is in use (from any existing cluster node)
+ Test-NetConnection -ComputerName "10.0.1.50" -InformationLevel Quiet
+ # Should return False if IP is available
+ ```
+
+3. **Ensure same subnet**:
+ - New IP must be in the same subnet as existing nodes
+ - Use the same prefix length (subnet mask)
+
+4. **Document the IP assignment**:
+ - Keep track of which IPs are assigned to nodes
+ - Maintain an IP address management spreadsheet or IPAM system
+
+### Related Documentation
+
+- [Azure Local Network Requirements](https://learn.microsoft.com/azure-stack/hci/concepts/host-network-requirements)
+- [Add servers to an Azure Local cluster](https://learn.microsoft.com/azure-stack/hci/manage/add-server)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-StorageAdapterReadiness.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-StorageAdapterReadiness.md
index 9afc02a..cc0720b 100644
--- a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-StorageAdapterReadiness.md
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-StorageAdapterReadiness.md
@@ -112,5 +112,5 @@ The Storage Adapter should support VLANID, but not have a value configured. Netw
```
### Failure: Adapter does not support VLANID. The Storage adapter should support VLANID.
-
-Storage Adapters must support VLANID. Please see [Select a network adapter](https://learn.microsoft.com/en-us/azure/azure-stack/hci/deploy/azure-stack-hci-network-adapter) for more information on supported network adapters.
\ No newline at end of file
+
+Storage Adapters must support VLANID.
\ No newline at end of file
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-StorageConnectivityType.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-StorageConnectivityType.md
new file mode 100644
index 0000000..b9b8531
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-StorageConnectivityType.md
@@ -0,0 +1,195 @@
+# AzStackHci_Network_Test_StorageConnectivityType
+
+| Name | AzStackHci_Network_Test_StorageConnectivityType |
+|------|--------------------------------------------------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Deployment |
+
+## Overview
+
+This validator checks that Rack Aware clusters do not use switchless storage connectivity. Rack Aware deployments require switched storage connectivity to ensure proper network routing and communication across multiple racks.
+
+## Requirements
+
+For Rack Aware clusters:
+1. Storage connectivity must NOT be switchless (must use network switches)
+
+For Standard or Stretch clusters:
+- This validation is skipped (both switched and switchless are supported)
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about the storage connectivity type.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_StorageConnectivityType",
+ "DisplayName": "Test storage connectivity type for Rack Aware cluster",
+ "Title": "Test storage connectivity type for Rack Aware cluster",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Test that switchless storage connectivity is NOT used for Rack Aware cluster.",
+ "Remediation": "",
+ "TargetResourceID": "StorageConnectivityType",
+ "TargetResourceName": "StorageConnectivityType",
+ "TargetResourceType": "StorageConnectivityType",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "",
+ "Resource": "StorageConnectivityType",
+ "Detail": "Switchless storage connectivity is used for Rack Aware cluster.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: Switchless Storage Used for Rack Aware Cluster
+
+**Error Message:**
+```text
+Switchless storage connectivity is used for Rack Aware cluster.
+```
+
+**Root Cause:** The deployment is configured to use switchless storage connectivity, but this is not supported for Rack Aware clusters. Rack Aware clusters span multiple racks and require network switches to route storage traffic between racks.
+
+#### Remediation Steps
+
+##### Change to Switched Storage Connectivity
+
+1. Review your deployment configuration and locate the storage connectivity setting.
+
+2. Change the storage connectivity type from switchless to switched:
+
+ **In deployment configuration file:**
+ - Look for a parameter like `SwitchlessDeploy`, `StorageConnectivityType`, or similar
+ - Change from `switchless` to `switched` or from `true` to `false`
+
+ **Example configuration change:**
+ ```json
+ {
+ "clusterPattern": "RackAware",
+ "storageConnectivity": "switched"
+ }
+ ```
+
+3. Ensure your network infrastructure supports switched storage:
+ - Network switches are installed and configured
+ - Storage adapters are connected to network switches (not directly to each other)
+ - VLANs are configured for storage traffic (if required)
+ - Switches support required features (Jumbo Frames, RDMA/RoCE if used)
+
+4. Retry the deployment with the updated configuration.
+
+---
+
+## Additional Information
+
+### Understanding Storage Connectivity Types
+
+**Switched Storage:**
+- Storage adapters connect through network switches
+- Required for Rack Aware clusters
+- Supports cross-rack communication
+- More flexible for scaling and reconfiguration
+- Requires network switch infrastructure
+
+**Switchless Storage:**
+- Storage adapters connect directly between nodes
+- Only supported for Standard and Stretch clusters
+- Not supported for Rack Aware clusters
+- Simpler cabling but less flexible
+- Typically limited to 2- or 3-node clusters
+
+### Why Switchless is Not Supported for Rack Aware
+
+Rack Aware clusters cannot use switchless storage because:
+1. **Cross-rack connectivity**: Nodes in different racks cannot directly connect without switches
+2. **Scalability**: Direct connections don't scale beyond a few nodes
+3. **Fault domains**: Rack-level fault domains require switch-based routing
+4. **Network topology**: Physical rack separation requires switch infrastructure
+
+### Storage Connectivity Requirements by Cluster Type
+
+| Cluster Type | Switched | Switchless | Notes |
+|-------------|----------|------------|-------|
+| **Standard** | ✓ Supported | ✓ Supported | Both options available |
+| **Stretch** | ✓ Supported | ✓ Supported | Both options available |
+| **Rack Aware** | ✓ Required | ✗ Not Supported | Must use switched |
+
+### Network Switch Requirements for Switched Storage
+
+When using switched storage connectivity, ensure your switches:
+
+1. **Support required speeds**:
+ - Minimum: 10Gbps
+ - Recommended: 25Gbps or higher
+ - Match your adapter capabilities
+
+2. **Support required features**:
+ - Jumbo Frames (MTU 9000+)
+ - RDMA/RoCE (if using RDMA)
+ - Flow Control (if using RoCE)
+ - QoS/Priority Flow Control (for lossless Ethernet)
+
+3. **Proper VLAN configuration**:
+ - Storage traffic on dedicated VLAN (recommended)
+ - Consistent VLAN configuration across all switches
+ - Proper inter-switch connectivity
+
+4. **Inter-rack connectivity**:
+ - Switches in different racks must be interconnected
+ - Sufficient bandwidth between racks
+ - Redundant paths for fault tolerance
+
+### Verifying Switched Storage Configuration
+
+After configuring switched storage:
+
+```powershell
+# Verify network adapter connectivity
+Get-NetAdapter | Where-Object { $_.Status -eq "Up" } |
+ Select-Object Name, Status, LinkSpeed, InterfaceDescription |
+ Format-Table -AutoSize
+
+# Check if adapters are connected to switches (not directly connected)
+# LinkSpeed should show switch port speed, not direct connection speed
+# InterfaceDescription should show adapter model
+```
+
+### Converting from Switchless to Switched
+
+If you started planning for switchless but need switched:
+
+1. **Physical changes required**:
+ - Install network switches if not present
+ - Recable storage adapters to connect through switches
+ - Configure switch ports appropriately
+
+2. **Configuration changes required**:
+ - Update deployment configuration to use switched mode
+ - Configure VLANs on switches (if using)
+ - Update network intent configuration
+
+3. **Verify connectivity**:
+ - Test network connectivity through switches
+ - Verify RDMA functionality (if using)
+ - Test storage performance
+
+### Related Documentation
+
+- [Network reference patterns](https://learn.microsoft.com/azure-stack/hci/plan/network-patterns-overview)
diff --git a/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-StorageVlan-2Node-Switchless.md b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-StorageVlan-2Node-Switchless.md
new file mode 100644
index 0000000..5a46ce2
--- /dev/null
+++ b/TSG/EnvironmentValidator/Networking/Troubleshoot-Network-Test-StorageVlan-2Node-Switchless.md
@@ -0,0 +1,268 @@
+# AzStackHci_Network_Test_Network_StorageVlanFor2NodeSwitchLess
+
+| Name | AzStackHci_Network_Test_Network_StorageVlanFor2NodeSwitchLess |
+|------|-----------------------------------------------------------------|
+| Severity | Critical: This validator will block operations until remediated. |
+| Applicable Scenarios | Deployment |
+
+## Overview
+
+This validator checks that 2-node switchless deployments provide exactly one storage VLAN ID for each storage adapter. Switchless deployments require VLAN configuration to properly isolate storage traffic between the two nodes.
+
+## Requirements
+
+For 2-node switchless deployments:
+1. The deployment configuration must include a storageNetworks section with VLAN IDs
+2. The number of VLAN IDs provided must equal the number of storage adapters
+
+For other deployment types (switched or non-2-node):
+- This validation is skipped (not applicable)
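+
+To sanity-check the count locally before deploying, a minimal sketch assuming the configuration follows the JSON shape shown later in this article (the `deploymentConfig.json` path is illustrative):
+
+```powershell
+$config = Get-Content .\deploymentConfig.json -Raw | ConvertFrom-Json
+
+# One VLAN ID is expected per storage adapter in the storage intent
+$vlanIds = @($config.storageNetworks.vlanId | Where-Object { $_ })
+$storageAdapters = @(($config.intents | Where-Object { $_.trafficType -contains "Storage" }).adapter)
+
+if ($vlanIds.Count -ne $storageAdapters.Count) {
+    Write-Host "MISMATCH: $($vlanIds.Count) VLAN ID(s) for $($storageAdapters.Count) storage adapter(s)"
+} else {
+    Write-Host "OK: one VLAN ID per storage adapter"
+}
+```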
+
+## Troubleshooting Steps
+
+### Review Environment Validator Output
+
+Review the Environment Validator output JSON. Check the `AdditionalData.Detail` field for information about storage VLAN configuration.
+
+```json
+{
+ "Name": "AzStackHci_Network_Test_Network_StorageVlanFor2NodeSwitchLess",
+ "DisplayName": "Test storage VLANID requirement for 2-node switchless deployment",
+ "Title": "Test storage VLANID requirement for 2-node switchless deployment",
+ "Status": 1,
+ "Severity": 2,
+ "Description": "Check user provides one storage VLANID for each storage adapter provided on 2-node switchless deployment",
+ "Remediation": "Please provide valid storageNetworks and storage VLANID information in your deployment configuration file: Make sure you provide one storage VLANID for each storage adapter provided on 2-node switchless deployment.",
+ "TargetResourceID": "StorageVlanIdFor2NodeSwitchLess",
+ "TargetResourceName": "StorageVlanIdFor2NodeSwitchLess",
+ "TargetResourceType": "StorageVlanIdFor2NodeSwitchLess",
+ "Timestamp": "",
+ "AdditionalData": {
+ "Source": "",
+ "Resource": "StorageVlanIdFor2NodeSwitchLess",
+ "Detail": "No storageNetworks section or valid storage VLANID info provided in the configuration.",
+ "Status": "FAILURE",
+ "TimeStamp": ""
+ }
+}
+```
+
+---
+
+### Failure: No Storage VLAN IDs Provided
+
+**Error Message:**
+```text
+No storageNetworks section or valid storage VLANID info provided in the configuration.
+```
+
+**Root Cause:** The deployment configuration is missing the storageNetworks section or does not contain valid storage VLAN ID information. For 2-node switchless deployments, VLAN IDs are required to properly tag and isolate storage traffic.
+
+#### Remediation Steps
+
+##### Add Storage VLAN Configuration
+
+1. Identify how many storage adapters are in your storage intent:
+
+ - Check your deployment configuration for the storage intent definition
+ - Count the number of adapters defined for storage
+
+2. Update your deployment configuration file to include storage VLAN IDs:
+
+ **Example configuration (JSON format):**
+ ```json
+ {
+ "storageNetworks": [
+ {
+ "name": "Storage1",
+ "networkAdapterName": "Ethernet 2",
+ "vlanId": 711
+ },
+ {
+ "name": "Storage2",
+ "networkAdapterName": "Ethernet 3",
+ "vlanId": 712
+ }
+ ],
+ "intents": [
+ {
+ "name": "StorageIntent",
+ "trafficType": ["Storage"],
+ "adapter": ["Ethernet 2", "Ethernet 3"]
+ }
+ ]
+ }
+ ```
+
+3. Ensure the number of VLAN IDs matches the number of storage adapters:
+ - 2 storage adapters → 2 VLAN IDs required
+ - 4 storage adapters → 4 VLAN IDs required
+
+4. Use unique VLAN IDs for each storage adapter:
+ - Do not reuse the same VLAN ID for multiple adapters
+ - Use VLANs that are configured on your network infrastructure (if applicable)
+ - Recommended range: 700-799 for storage traffic
+
+5. Retry the deployment with the updated configuration.
+
+---
+
+### Failure: VLAN Count Does Not Match Adapter Count
+
+**Error Message:**
+```text
+Found [ 1 ] storage VLANID in the configuration: 711
+```
+(When there are 2 storage adapters but only 1 VLAN ID provided)
+
+**Root Cause:** The number of storage VLAN IDs provided in the configuration does not match the number of storage adapters. Each storage adapter in a 2-node switchless deployment requires its own VLAN ID.
+
+#### Remediation Steps
+
+1. Check how many storage adapters are defined in your storage intent:
+
+ ```powershell
+ # Check your configuration file or deployment parameters
+ # Count the storage adapters
+ ```
+
+2. Update the storageNetworks section to provide one VLAN ID per adapter:
+
+ **Example: 2 adapters require 2 VLAN IDs**
+ ```json
+ {
+ "storageNetworks": [
+ {
+ "name": "Storage1",
+ "networkAdapterName": "Ethernet 2",
+ "vlanId": 711
+ },
+ {
+ "name": "Storage2",
+ "networkAdapterName": "Ethernet 3",
+         "vlanId": 712
+ }
+ ]
+ }
+ ```
+
+3. Ensure VLAN IDs are unique:
+ - Each adapter must have a different VLAN ID
+ - Do not use the same VLAN ID for multiple adapters in switchless deployments
+
+4. Retry the deployment with the updated configuration.
+
+---
+
+## Additional Information
+
+### Why VLANs are Required for 2-Node Switchless
+
+In 2-node switchless deployments:
+- Storage adapters connect directly between the two nodes (no switch)
+- VLANs are used to create logical network separation
+- Each adapter pair (node1-adapter1 ↔ node2-adapter1) uses a unique VLAN
+- This prevents network loops and ensures proper traffic isolation
+
+### VLAN Configuration Example
+
+For a 2-node switchless cluster with 2 storage adapters:
+
+| Node | Adapter | VLAN ID | Connects To |
+|------|---------|---------|-------------|
+| NODE1 | Ethernet 2 | 711 | NODE2 Ethernet 2 (VLAN 711) |
+| NODE1 | Ethernet 3 | 712 | NODE2 Ethernet 3 (VLAN 712) |
+| NODE2 | Ethernet 2 | 711 | NODE1 Ethernet 2 (VLAN 711) |
+| NODE2 | Ethernet 3 | 712 | NODE1 Ethernet 3 (VLAN 712) |
+
+### Recommended VLAN ID Ranges
+
+| Purpose | Recommended Range | Example |
+|---------|------------------|---------|
+| Storage (switchless) | 700-799 | 711, 712, 713, 714 |
+| Management | 1-99 | 1, 10, 50 |
+| Compute/VM | 100-699 | 100, 200, 300 |
+
+### Storage Network Configuration File Example
+
+Complete example for 2-node switchless with 4 storage adapters:
+
+```json
+{
+ "clusterPattern": "Standard",
+ "switchlessDeploy": true,
+ "storageNetworks": [
+ {
+ "name": "Storage1",
+ "networkAdapterName": "Ethernet 2",
+ "vlanId": 711
+ },
+ {
+ "name": "Storage2",
+ "networkAdapterName": "Ethernet 3",
+ "vlanId": 712
+ },
+ {
+ "name": "Storage3",
+ "networkAdapterName": "Ethernet 4",
+ "vlanId": 713
+ },
+ {
+ "name": "Storage4",
+ "networkAdapterName": "Ethernet 5",
+ "vlanId": 714
+ }
+ ],
+ "intents": [
+ {
+ "name": "ManagementIntent",
+ "trafficType": ["Management"],
+ "adapter": ["Ethernet", "Ethernet 1"]
+ },
+ {
+ "name": "StorageIntent",
+ "trafficType": ["Storage"],
+ "adapter": ["Ethernet 2", "Ethernet 3", "Ethernet 4", "Ethernet 5"]
+ }
+ ]
+}
+```
+
+### When This Validation Applies
+
+This validator only runs when **ALL** of these conditions are met:
+1. **Node count** = 2 (exactly 2 nodes)
+2. **Switchless** = true (direct-connect storage)
+3. **Scenario** = Deployment
+
+For other scenarios, this validation is skipped.
+
+### Verifying VLAN Configuration
+
+After deployment, verify VLANs are configured:
+
+```powershell
+# Check VLAN IDs on storage adapters
+Get-NetAdapter | Where-Object { $_.Status -eq "Up" } | ForEach-Object {
+ $vlan = Get-NetAdapterAdvancedProperty -Name $_.Name -RegistryKeyword "VlanID" -ErrorAction SilentlyContinue
+ [PSCustomObject]@{
+ Adapter = $_.Name
+ VLANID = if ($vlan) { $vlan.DisplayValue } else { "None" }
+ Status = $_.Status
+ }
+} | Format-Table -AutoSize
+```
+
+### Related Documentation
+
+- [Network reference patterns](https://learn.microsoft.com/azure-stack/hci/plan/network-patterns-overview)