Hi, please run the query below in your database. It lists the top 50 tables by used space.
Then review the results. Start with the custom tables, since you already know their schema and their dependencies on other tables.
Typically, the document (DocuRef), batch, database log, and DMF staging tables are good cleanup candidates, as they tend to accumulate large volumes of data.
This query should kick-start your effort to reduce the database size.
-- Top N largest tables by used space (GB)
-- Change @TopN as needed
DECLARE @TopN int = 50;
WITH TableSizes AS (
    SELECT
        s.name AS schema_name,
        t.name AS table_name,
        -- Count rows from the heap/clustered index only (index_id 0 or 1),
        -- otherwise rows in nonclustered indexes are double-counted
        SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
        -- Sizes in MB (pages are 8 KB)
        CAST(SUM(ps.reserved_page_count) * 8.0 / 1024 AS DECIMAL(19,2)) AS reserved_mb,
        CAST(SUM(ps.used_page_count) * 8.0 / 1024 AS DECIMAL(19,2)) AS used_mb,
        CAST((
            SUM(ps.in_row_data_page_count)
            + SUM(ps.lob_used_page_count)
            + SUM(ps.row_overflow_used_page_count)
        ) * 8.0 / 1024 AS DECIMAL(19,2)) AS data_mb,
        CAST((
            SUM(ps.used_page_count)
            - (SUM(ps.in_row_data_page_count)
               + SUM(ps.lob_used_page_count)
               + SUM(ps.row_overflow_used_page_count))
        ) * 8.0 / 1024 AS DECIMAL(19,2)) AS index_mb
    FROM sys.dm_db_partition_stats AS ps
    INNER JOIN sys.tables AS t ON t.object_id = ps.object_id
    INNER JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    GROUP BY s.name, t.name
)
SELECT TOP (@TopN)
    schema_name,
    table_name,
    row_count,
    -- Convert to GB
    CAST(reserved_mb / 1024.0 AS DECIMAL(19,2)) AS reserved_gb,
    CAST(used_mb / 1024.0 AS DECIMAL(19,2)) AS used_gb,
    CAST(data_mb / 1024.0 AS DECIMAL(19,2)) AS data_gb,
    CAST(index_mb / 1024.0 AS DECIMAL(19,2)) AS index_gb
FROM TableSizes
ORDER BY used_gb DESC;
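Once you have identified a candidate table, delete old rows in small batches rather than in one large transaction, so the transaction log does not balloon and blocking stays short. Below is a minimal sketch of that pattern. The table and column names (`BatchJobHistory`, `EndDateTime`) and the 6-month retention window are assumptions for illustration, not a recommendation for your environment; verify the names against your schema, and test on a restored backup first.

```sql
-- Illustrative batched delete; table/column names and retention are assumptions.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    -- Delete at most 10,000 rows per iteration to keep transactions small
    DELETE TOP (10000) FROM dbo.BatchJobHistory
    WHERE EndDateTime < DATEADD(MONTH, -6, GETUTCDATE());

    SET @rows = @@ROWCOUNT;  -- loop ends when no rows matched
END;
```

Note that deleting rows frees space inside the data file but does not shrink the file itself; where possible, prefer the application's own cleanup routines over direct SQL deletes so related tables stay consistent.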