Commit
Merge pull request #4893 from MicrosoftDocs/main
updates
jswymer authored Nov 13, 2023
2 parents d015236 + 2955bfc commit 2f346f7
Showing 19 changed files with 340 additions and 98 deletions.
1 change: 1 addition & 0 deletions dev-itpro/TOC.md
@@ -368,6 +368,7 @@
#### Prepare
##### [Upgrade to Business Central v14](upgrade/Upgrade-Considerations.md#online)
##### [Plan cloud migration](administration/cloud-migration-plan-prepare.md)
##### [Estimate data size in online tenant](administration/cloud-migration-estimate-compressed-data-size.md)
##### [Prerequisites for cloud migration](administration/cloud-migration-prerequisites.md)
##### [Align SQL table definitions](administration/migration-align-table-definitions.md)
##### [Clean data for cloud migration](administration/migration-clean-data.md)
101 changes: 101 additions & 0 deletions dev-itpro/administration/cloud-migration-estimate-compressed-data-size.md
@@ -0,0 +1,101 @@
---
title: Estimating the data size in your Business Central online tenant
description: This article outlines how to estimate the data size in your Business Central online tenant
author: kennienp
ms.author: kepontop
ms.reviewer: jswymer
ms.service: dynamics365-business-central
ms.topic: conceptual
ms.date: 11/06/2023
ms.custom: bap-template
---

# Estimating the data size in your Business Central online tenant

In the online version of [!INCLUDE[prod_short](../developer/includes/prod_short.md)], data is compressed using the SQL Server data compression feature. As a consequence, the data size in your on-premises database might not match the data size when migrated to the [!INCLUDE[prod_short](../developer/includes/prod_short.md)] service.

Currently, all tables in the online version of [!INCLUDE[prod_short](../developer/includes/prod_short.md)] are compressed with **CompressionType** set to **Page**.
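
To see which compression type the tables in your on-premises database currently use, you can query the SQL Server catalog views. The following query is a minimal sketch (standard SQL Server catalog views, not specific to [!INCLUDE[prod_short](../developer/includes/prod_short.md)]); it lists the compression setting for each table partition:

```SQL
-- List the current compression setting for each table partition (sketch).
SELECT t.name AS table_name,
       p.index_id,
       p.partition_number,
       p.data_compression_desc
FROM sys.tables t
JOIN sys.partitions p
    ON p.object_id = t.object_id
ORDER BY t.name, p.index_id, p.partition_number;
```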

[!INCLUDE[evaluate_table_compression](../includes/include_evaluate_table_compression.md)]

If you want to estimate the compressed size of all or some tables in your database, you can create (and possibly modify) the following stored procedure:

```SQL
CREATE PROCEDURE estimate_page_compressed_table_sizes
AS
BEGIN
SET NOCOUNT ON;

DECLARE @table_name sysname;

CREATE TABLE #compressed_table_report (
table_name sysname,
schema_name nvarchar(max),
index_id int,
partition_number int,
size_with_current_compression_setting bigint,
size_with_requested_compression_setting bigint,
sample_size_with_current_compression_setting bigint,
sample_size_with_requested_compression_setting bigint
);

DECLARE tables_cur CURSOR FOR
SELECT name
FROM sys.tables
-- adjust this part if you want to restrict the tables in the calculation
-- WHERE name IN ('table name 1', 'table name 2', 'table name 3')
;

OPEN tables_cur;

FETCH NEXT FROM tables_cur INTO @table_name
WHILE @@Fetch_Status = 0
BEGIN
INSERT INTO #compressed_table_report
EXEC sp_estimate_data_compression_savings
@schema_name = 'dbo', -- Business Central uses the dbo schema
@object_name = @table_name,
@index_id = NULL,
@partition_number = NULL,
@data_compression = 'PAGE'
;

FETCH NEXT FROM tables_cur INTO @table_name
END;

CLOSE tables_cur;
DEALLOCATE tables_cur;

SELECT table_name
, avg(size_with_current_compression_setting) as avg_size_with_current_compression_setting_KB
, avg(size_with_requested_compression_setting) as avg_size_with_requested_compression_setting_KB
, avg(size_with_current_compression_setting - size_with_requested_compression_setting) AS avg_size_saving_KB
FROM #compressed_table_report
GROUP BY table_name
ORDER BY avg_size_saving_KB DESC
;

DROP TABLE #compressed_table_report
;

END
GO
```

To run the stored procedure, execute the following statements, replacing the placeholder with the name of your tenant database:

```SQL
USE [<tenant database>] -- change to your database
GO

EXEC estimate_page_compressed_table_sizes
GO
```
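
The procedure returns one row per table, with sizes in KB. If you want a single figure for the estimated compressed data size, you can capture the procedure's output with `INSERT ... EXEC` and total it. The following is a sketch that assumes the stored procedure above has already been created:

```SQL
-- Capture the per-table estimates in a temporary table (sketch).
CREATE TABLE #estimates (
    table_name sysname,
    avg_size_with_current_compression_setting_KB bigint,
    avg_size_with_requested_compression_setting_KB bigint,
    avg_size_saving_KB bigint
);

INSERT INTO #estimates
EXEC estimate_page_compressed_table_sizes;

-- Rough total of the estimated page-compressed data size, converted from KB to GB.
SELECT SUM(avg_size_with_requested_compression_setting_KB) / 1048576.0 AS estimated_compressed_size_GB
FROM #estimates;

DROP TABLE #estimates;
```

Because `sp_estimate_data_compression_savings` works on a sample of the data, treat the result as an estimate rather than an exact size.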


## Next steps

- [Using table partitioning and data compression in Business Central](./using-sql-partitioning-and-compression.md)
- [Check prerequisites](cloud-migration-prerequisites.md)
- [Optimizing cloud migration performance](migration-optimize-replication.md)
- [Run data migration setup](migration-setup.md)
4 changes: 4 additions & 0 deletions dev-itpro/administration/cloud-migration-plan-prepare.md
@@ -57,6 +57,10 @@ In certain circumstances, you may not want to migrate all data. Here are a coupl
For more information, see [FAQ about Migrating to Business Central Online from On-Premises Solutions](faq-migrate-data.md) and [Troubleshooting Cloud Migration](migration-troubleshooting.md).

## Estimate the data size in your [!INCLUDE[prod_short](../includes/prod_short.md)] online tenant

In the online version of [!INCLUDE[prod_short](../developer/includes/prod_short.md)], data is compressed using the SQL Server data compression feature. This means that the data size in your on-premises database might not match the data size when migrated to the [!INCLUDE[prod_short](../developer/includes/prod_short.md)] service. For more information about estimating the compressed size of your data, see [Estimating the data size in your Business Central online tenant](./cloud-migration-estimate-compressed-data-size.md).


## Define migration strategy

<!--Ideally, to ensure all data is migrated, you'd stop all users from working in the on-premises environment while you made the migration to online. However, given the time it takes to complete the migration, this downtime typically isn't feasible. So, you want to devise a strategy that provides a stable migration, limits downtime for users, and results in no data loss. There's no exact approach to follow, because of the unknowns that can arise. But in general, the following approach provides a solid basis or starting point for most migrations.-->
11 changes: 7 additions & 4 deletions dev-itpro/administration/migrate-business-central-on-premises.md
@@ -58,7 +58,10 @@ The cloud migration process transfers business data from one or more companies i
> [!NOTE]
> **What data isn't migrated?** During the data migration process, [!INCLUDE[prod_short](../developer/includes/prod_short.md)] doesn't migrate most system tables, users, and permissions. Additionally, record links aren't currently migrated because they are associated with a user ID, and users aren't migrated from the on-premises environment to the online tenant.
In general, data is migrated table by table. Depending on their size, tables may also be combined and migrated together for performance reasons. In either case, the success and failure of the migration is tracked for each table. For instance, tables fail to migrate if they can't be found, or if the schema doesn't match between the cloud and the on-premises tables. If a table fails to migrate, the error will be captured, and the migration moves on to the next table until completed.
In general, data is migrated table by table. Depending on their size, tables might also be combined and migrated together for performance reasons. In either case, the success or failure of the migration is tracked for each table. For instance, tables fail to migrate if they can't be found, or if the schema doesn't match between the cloud and the on-premises tables. If a table fails to migrate, the error is captured, and the migration moves on to the next table until completed.

> [!NOTE]
> In [!INCLUDE[prod_short](../developer/includes/prod_short.md)] online, data is compressed using the SQL Server data compression feature. As a consequence, the data size in your on-premises database might not match the data size when migrated to the [!INCLUDE[prod_short](../developer/includes/prod_short.md)] service. For more information about estimating the compressed size of your data, go to [Estimating the data size in your Business Central online tenant](./cloud-migration-estimate-compressed-data-size.md).
Data migration can be run multiple times. The data migration time varies depending on factors such as the amount of data to migrate, your SQL Server configuration, and your connection speeds. The initial migration takes the longest amount of time to complete because all data is migrating. After the initial migration, only changes in data will be migrated, resulting in faster iterations. It's not necessary to run the migration process more than once. But if users are still using the on-premises system, you must run at least one more migration to ensure all data is moved to the cloud before transacting in [!INCLUDE [prod_short](../includes/prod_short.md)] online.

@@ -165,13 +168,13 @@ This section outlines the general process or phases you go through to migrate da

This step migrates data from on-premises to online. It starts when you run the **Run data replication** assisted setup guide in [!INCLUDE [prod_short](../includes/prod_short.md)] online. At the end of the process, you have a copy of the on-premises data in the relevant [!INCLUDE [prod_short](../includes/prod_short.md)] online environment.

At this point in the process, you can verify whether the migration went well or not, fix any problems, and rerun the replication multiple times if you want to. For example, suppose you've run the assisted setup guide from a test company in a sandbox environment because you worry that many extensions might be problematic. Once the data has been replicated to the sandbox environment, you can use the troubleshooting tools in the [!INCLUDE [prodadmincenter](../developer/includes/prodadmincenter.md)].
At this point in the process, you can verify whether the migration went well or not, fix any problems, and rerun the replication multiple times if you want to. For example, suppose you ran the assisted setup guide from a test company in a sandbox environment because you worry that many extensions might be problematic. Once the data has been replicated to the sandbox environment, you can use the troubleshooting tools in the [!INCLUDE [prodadmincenter](../developer/includes/prodadmincenter.md)].

To get started, go to [Replicate data](migration-data-replication.md).

1. Data upgrade

After data replication is complete, the cloud migration may have the status *Upgrade Pending* on the **Cloud Migration Management** page. Data upgrade is typically required when migrating from Business Central version that is earlier than the version used on the target online environment. During data upgrade, the logic required upgrade the platform-related data in database is run. This step starts when you choose the **Run Data Upgrade Now** action in the **Cloud Migration Management** page in [!INCLUDE [prod_short](../includes/prod_short.md)] online for the specific environment.
After data replication is complete, the cloud migration might have the status *Upgrade Pending* on the **Cloud Migration Management** page. Data upgrade is typically required when migrating from a Business Central version that is earlier than the version used on the target online environment. During data upgrade, the logic required to upgrade the platform-related data in the database runs. This step starts when you choose the **Run Data Upgrade Now** action on the **Cloud Migration Management** page in [!INCLUDE [prod_short](../includes/prod_short.md)] online for the specific environment.

<!--Once you have chosen this action, both the **Run Migration Now** and the **Run Data Upgrade Now** action can no longer be used for this company in the environment. If the upgrade has failed, an automatic point-in-time restore is run to revert the tenant to the point before upgrade. You can then fix the errors and try the upgrade again. Alternatively, you can start the cloud migration in another environment, or you can restore the current environment from a backup from a point in time before the data upgrade. Or, delete all companies in the current environment and start the migration again.-->

@@ -221,7 +224,7 @@ To make setting up this read-only tenant more efficient, we created the *Intelli
> [!NOTE]
> Before you configure a connection from on-premises to [!INCLUDE [prod_short](../developer/includes/prod_short.md)] online, make sure that at least one user in each company is assigned SUPER permissions.
Users that are reassigned to the *Intelligent Cloud* user group will have access to read ALL data by default. If you need to further restrict what data a user should be able to read, the SUPER user may create new user groups and permissions sets and assign users accordingly. It's highly recommended to create any new permissions sets from a copy of the *Intelligent Cloud* permission set and then take away permissions you don't want users to have.
Users that are reassigned to the *Intelligent Cloud* user group have access to read ALL data by default. If you need to further restrict what data a user should be able to read, the SUPER user can create new user groups and permission sets and assign users accordingly. It's highly recommended to create any new permission sets from a copy of the *Intelligent Cloud* permission set and then take away permissions you don't want users to have.

> [!WARNING]
> If you grant insert, modify or delete permissions to any resource in the application that was set to read-only, it could have a negative impact on the data in [!INCLUDE[prod_short](../developer/includes/prod_short.md)] online. If this occurs, you may have to clear all your data and rerun a full migration to correct this.
@@ -1,5 +1,5 @@
---
title: Analyzing Environment Validation Telemetry
title: Analyzing environment validation telemetry
description: Learn about the environment validation telemetry in Business Central.
author: KennieNP
ms.author: kepontop
@@ -9,17 +9,18 @@ ms.date: 10/27/2023
ms.custom: bap-template
ms.service: dynamics365-business-central
---
# Analyzing Environment Validation Telemetry

# Analyzing environment validation telemetry

[!INCLUDE[component](../developer/includes/online_only.md)]

[!INCLUDE[azure-ad-to-microsoft-entra-id](~/../shared-content/shared/azure-ad-to-microsoft-entra-id.md)]

Non-compatible, partner apps (per-tenant extensions) can block upgrades to next major version of [!INCLUDE[prod_short](../developer/includes/prod_short.md)] if they can't compile on that version. The [!INCLUDE[prod_short](../developer/includes/prod_short.md)] service proactively validates all environments regularly against the next release. If an environment isn't ready to be updated, then it sends an email to the administrator and, starting from September 2023, emits telemetry on these validations.
Noncompatible partner apps (per-tenant extensions) can block upgrades to the next major version of [!INCLUDE[prod_short](../developer/includes/prod_short.md)] if they can't compile on that version. The [!INCLUDE[prod_short](../developer/includes/prod_short.md)] service proactively validates all environments regularly against the next release. If an environment isn't ready to be updated, then the [!INCLUDE[prod_short](../developer/includes/prod_short.md)] service sends an email to the administrator and, starting from September 2023, emits telemetry on these validations.

With this telemetry, partners can monitor environments for the customers and setup alerts so that they know up front which customers will need help prior to updating to the next major version of [!INCLUDE[prod_short](../developer/includes/prod_short.md)].
With this telemetry, partners can monitor environments for their customers and set up alerts so that they know up front which customers need help before updating to the next major version of [!INCLUDE[prod_short](../developer/includes/prod_short.md)].

Failed operations result in a trace log entry that includes a reason for the failure.
Failed operations result in a tracelog entry that includes a reason for the failure.

The validation flow is as follows:

@@ -69,7 +70,7 @@ Occurs for each extension in the environment.

|Dimension|Description or value|
|---------|-----|
|message|**Extension validation started: extension {extensionName} version {extensionVersion} by {extensionPublisher} ({extensionId})** <br /><br /> `{extensionName}` indicates the name of the extension.<br /><br /> `{extensionVersion}` indicates the version of the extension.<br /><br /> `{extensionPublisher}` indicates the publisher of the extension.<br /><br /> `{extensionId}` indicates the id of the extension.|
|message|**Extension validation started: extension {extensionName} version {extensionVersion} by {extensionPublisher} ({extensionId})** <br /><br /> `{extensionName}` indicates the name of the extension.<br /><br /> `{extensionVersion}` indicates the version of the extension.<br /><br /> `{extensionPublisher}` indicates the publisher of the extension.<br /><br /> `{extensionId}` indicates the ID of the extension.|

### Custom dimensions

@@ -112,7 +113,7 @@ Occurs if the extension validated successfully on the next major of [!INCLUDE[pr

## <a name="extension-validation-diagnostic-reported"></a>Extension Validation diagnostic reported (LC0210)

Occurs if something was not right when validating the extension on the next major of [!INCLUDE[prod_short](../developer/includes/prod_short.md)].
Occurs if something wasn't right when validating the extension on the next major of [!INCLUDE[prod_short](../developer/includes/prod_short.md)].

### General dimensions

@@ -293,7 +294,7 @@ Occurs if an extension validated with a diagnostic on the next major of [!INCLUD
|extensionId|[!INCLUDE[extensionId](../includes/include-telemetry-dimension-extension-id.md)]|
|extensionPublisher|[!INCLUDE[extensionPublisher](../includes/include-telemetry-dimension-extension-publisher.md)]|
|extensionVersion|[!INCLUDE[extensionPublisher](../includes/include-telemetry-dimension-extension-version.md)]|
|mainExtension|Specifies the name of an extension that the validated extension has taken a dependancy n.|
|mainExtension|Specifies the name of an extension that the validated extension has taken a dependency on.|
|diagnosticCode|[!INCLUDE[diagnosticCode](../includes/include-telemetry-dimension-diagnostics-code.md)]|
|diagnosticMessage|[!INCLUDE[diagnosticMessage](../includes/include-telemetry-dimension-diagnostics-message.md)]|
|diagnosticSeverity|[!INCLUDE[diagnosticSeverity](../includes/include-telemetry-dimension-diagnostics-severity.md)]|
dev-itpro/administration/using-sql-partitioning-and-compression.md
@@ -117,6 +117,9 @@ However, extra CPU resources are required on the database server to compress and
With the **CompressionType** property, you can configure row or page compression, or configure the table not to use compression. With these compression settings, the [!INCLUDE[prod_short](../developer/includes/prod_short.md)] table synchronization process changes the SQL Server table, overwriting the current compression type, if any. You can choose to control data compression directly in SQL Server by setting the **CompressionType** property to **Unspecified**, in which case the table synchronization process doesn't control the data compression.

> [!NOTE]
> In the online version of [!INCLUDE[prod_short](../developer/includes/prod_short.md)], tables are compressed with **CompressionType** set to **Page**.

To evaluate whether a table is a good candidate to compress, you can use the stored procedure `sp_estimate_data_compression_savings` in SQL Server. For more information, see [sp_estimate_data_compression_savings (Transact-SQL)](/sql/relational-databases/system-stored-procedures/sp-estimate-data-compression-savings-transact-sql).
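
For example, a call along the following lines estimates the savings from page compression for a single table. This is a sketch; `MyTable` is a placeholder for an actual table name in your database:

```SQL
-- Estimate PAGE compression savings for one table (sketch; 'MyTable' is a placeholder).
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'MyTable',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
```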

Because SQL Server supports data compression on the partition level, you can combine SQL Server data compression with table partitioning to achieve flexible data archiving on historical parts of a large table, without having the CPU overhead on the active part of the table.