I'm setting up a pipeline in Data Factory using an On-Prem Data Gateway connection, and I'm getting an error saying the data would be truncated even though the value is shorter than the field length:
Copy Command operation failed with error ''String or binary data would be truncated while reading column of type 'VARCHAR(50)'.
Here is the line it's erroring out on:
column 'LastName'. Truncated value: 'Meunier (ミュニエ・ã�'.
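I suspect the multi-byte characters are what push it over the limit: the value is under 50 characters, but more than 50 bytes once it's UTF-8 encoded in the staged file, which seems to be what VARCHAR(50) actually measures here. A rough illustration with a made-up name (the real value is cut off in the error message):

```python
# Hypothetical check of what I think is happening: a value can be shorter than
# 50 *characters* but longer than 50 *bytes* once the Japanese text is UTF-8 encoded.
# The name below is made up; the real LastName is cut off in the error output.
value = "Meunier (ミュニエ・フィリップ・ジャポネ)"

print(len(value))                  # 25 characters -> looks like it fits in 50
print(len(value.encode("utf-8")))  # 55 bytes      -> exceeds VARCHAR(50) if the limit is bytes
```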
How do I get around this problem? Since the source is on-prem, it forces me to use a staging environment, so I set up a blob container in Azure for that. Obviously cleaning up the data fixes it, but there will always be dirty data in the future. Any suggestions?