[ERROR] Unable to advance iterator for node with id ‘0’ for Kudu table ‘impala::data_dim’: Network error

In this article, we explain how to resolve the “Unable to advance iterator for node with id ‘0’ for Kudu table ‘impala::data_dim’: Network error” issue and provide a simple solution.

Complete error message:

ERROR: Unable to advance iterator for node with id ‘0’ for Kudu table ‘impala::data_dim’: Network error: receiving error from unknown peer: Transport endpoint is not connected (error 107)
[Cloudera][ImpalaJDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: Unable to advance iterator for node with id ‘0’ for Kudu table ‘data_dim’: Not found: Scanner not found (it may have expired)

Related Kudu server log messages:

01:11:48.13022 821538 connection.cc:664] server connection from 10.01.121.24:13022 recv error: Network error: RPC frame had a length of xxxxxx, but we only support messages up to xxxx bytes long.
01:11:48.13024 821538 connection.cc:295] Shutting down server connection from 10.01.121.24:13022 with pending inbound data (4/xxxxxx bytes received, last active 0 ns ago, status=Network error: RPC frame had a length of xxxxxx, but we only support messages up to xxxx bytes long.)
01:11:48.13025 821538 connection.cc:664] server connection from 10.01.121.25:13025 recv error: Network error: RPC frame had a length of xxxxxx, but we only support messages up to xxxx bytes long.
01:11:48.13027 821538 connection.cc:295] Shutting down server connection from 10.01.121.33:13025 with pending inbound data (4/xxxxxx bytes received, last active 0 ns ago, status=Network error: RPC frame had a length of xxxxxx, but we only support messages up to xxxx bytes long.)
01:11:48.13029 821540 connection.cc:664] server connection from 10.01.121.26:12564 recv error: Network error
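
To check whether your own cluster is hitting the same RPC errors, you can search the Kudu daemon logs directly. The log path below is the usual default location on a Cloudera-managed node and is only an assumption; adjust it for your deployment.

# Assumed default Kudu log directory on a Cloudera-managed node; adjust the path if needed.
grep -i "RPC frame had a length" /var/log/kudu/*.INFO*
grep -i "recv error" /var/log/kudu/*.INFO*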


Solution:

This error relates to the Kudu service in the Big Data cluster being unable to serve the table.
Step 1: Restart the Kudu service in Cloudera Manager.
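
If you prefer to script the restart instead of clicking through the Cloudera Manager UI, the Cloudera Manager REST API can restart a service. The hostname, credentials, API version, cluster name, and service name below are placeholders, so substitute the values from your own environment.

# All values (host, credentials, API version, cluster and service names) are placeholders.
curl -u admin:admin -X POST \
  "http://cm-host.example.com:7180/api/v19/clusters/Cluster1/services/kudu/commands/restart"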

If the same error persists after the restart, modify the following configuration in the Kudu service.

Step 2: Log in to Cloudera Manager with admin privileges and check whether the Kudu service is healthy.
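
Besides the health indicators in Cloudera Manager, you can optionally verify Kudu cluster health from the command line with the kudu ksck tool. The master hostnames below are placeholders for your actual Kudu master addresses.

# Placeholder master addresses; list your real Kudu masters, comma separated.
sudo -u kudu kudu cluster ksck master-1.example.com,master-2.example.com,master-3.example.com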

Step 3: If the service is healthy, change the configuration below:

I) Go to Cloudera Manager -> Kudu -> Configuration -> Kudu Service Advanced Configuration Snippet, and add the following flags:

--unlock_experimental_flags
--codegen_queue_capacity=200

Once the changes are done, restart the Kudu service in Cloudera Manager.
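
To confirm that the new flags were picked up after the restart, the Kudu daemon web UI lists the current flag values on its /varz page. The hostname below is a placeholder, and 8050 is only the default tablet server web UI port, so adjust both for your environment.

# Placeholder hostname; 8050 is the default Kudu tablet server web UI port.
curl -s http://tserver-1.example.com:8050/varz | grep codegen_queue_capacity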

After a successful restart, run the Kudu table query again and monitor it for a few more days to confirm that the Kudu service is stable.
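
As a quick check, you can re-run a query against the affected Kudu table from impala-shell and keep monitoring it over the following days. The Impala daemon hostname below is a placeholder, and the query is only an example; any query that scans the table will do.

# Placeholder Impala daemon hostname; any query that scans the Kudu table works as a test.
impala-shell -i impalad-1.example.com -q "SELECT COUNT(*) FROM data_dim;"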