[OVSDB-140] When large JSON responses are received by the switch in response to ODL queries, ODL's OVSDB interface becomes unusable. Created: 18/Feb/15  Updated: 19/Oct/17  Resolved: 12/Jan/16

Status: Resolved
Project: ovsdb
Component/s: API
Affects Version/s: unspecified
Fix Version/s: None

Type: Bug
Reporter: Jim West Assignee: Unassigned
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All


Issue Links:
Duplicate
duplicates OVSDB-134 Too large configuration file from OVS Resolved
External issue ID: 2732

 Description   

A little more detail:

  • It appears to me that the ODL OVSDB subsystem sends an initial query to the switch when an OVSDB connection comes up.
  • If the response to this query is 'too large' (currently 100,000 bytes), ODL closes the connection.
  • If the switch is configured to open the OVSDB connection, this process repeats indefinitely and the OVSDB subsystem isn't usable (at least for that switch).

A lot of detail:

Running code that I built myself from the stable/helium branch (more or less at stable helium 2).

I had one switch in my system with A LOT of OVSDB rows, and I hit this problem.

  • My switch is configured to connect to my controller when the controller starts up.
  • My controller starts and the switch connects
  • The OVSDB subsystem begins pulling information from my switch
  • My controller closes the OVSDB connection
  • My switch attempts to reconnect
  • repeat forever

I eventually was able to set a breakpoint and get a stack trace. The problem is in
ovsdb/library/src/main/java/org/opendaylight/ovsdb/lib/jsonrpc/JsonRpcDecoder.java, around line 116, in the decode(ChannelHandlerContext ctx, ByteBuf buf, List<Object> out) method:

for (; i < buf.writerIndex(); i++) {
    switch (buf.getByte(i)) {
        case '{':
            if (!inS) leftCurlies++;
            break;
        case '}':
            if (!inS) rightCurlies++;
            break;
        case '"':
            if (buf.getByte(i - 1) != '\\') inS = !inS;
            break;
    }

    if (leftCurlies != 0 && leftCurlies == rightCurlies && !inS) {
        ByteBuf slice = buf.readSlice(1 + i - buf.readerIndex());
        JsonParser jp = jacksonJsonFactory.createParser(new ByteBufInputStream(slice));
        JsonNode root = jp.readValueAsTree();
        out.add(root);
        leftCurlies = rightCurlies = lastRecordBytes = 0;
        recordsRead++;
        break;
    }

    if (i - buf.readerIndex() >= maxFrameLength) {
        fail(ctx, i - buf.readerIndex());
    }
}

...

private void fail(ChannelHandlerContext ctx, long frameLength) {
    logger.error("JSON too big. JSON content exceeded limit of {} bytes", maxFrameLength);
    if (frameLength > 0) {
        ctx.fireExceptionCaught(new TooLongFrameException(
                "frame length exceeds " + maxFrameLength + ": " + frameLength + " - discarded"));
    } else {
        ctx.fireExceptionCaught(new TooLongFrameException(
                "frame length exceeds " + maxFrameLength + " - discarding"));
    }
}
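For clarity, the framing logic above can be boiled down to a small, dependency-free sketch: scan the input, count top-level '{' and '}' outside string literals, and either report a complete frame, report that more input is needed, or fail once the frame limit is exceeded. The class and method names below are illustrative only, not the actual OVSDB API, and it operates on a String rather than a Netty ByteBuf.

```java
// Illustrative sketch of the brace-counting JSON framing idea in JsonRpcDecoder.
// Not the real OVSDB code: names are invented and input is a String, not a ByteBuf.
public class BraceFramer {
    /**
     * Returns the length of the first complete top-level JSON object in input,
     * or -1 if no object is complete yet; throws if maxFrameLength is exceeded
     * before the object closes (the behavior the reporter is hitting).
     */
    public static int frameLength(String input, int maxFrameLength) {
        int leftCurlies = 0, rightCurlies = 0;
        boolean inString = false;
        for (int i = 0; i < input.length(); i++) {
            switch (input.charAt(i)) {
                case '{':
                    if (!inString) leftCurlies++;
                    break;
                case '}':
                    if (!inString) rightCurlies++;
                    break;
                case '"':
                    // Toggle string state unless the quote is escaped.
                    if (i == 0 || input.charAt(i - 1) != '\\') inString = !inString;
                    break;
                default:
                    break;
            }
            // A complete top-level object: braces balanced, outside any string.
            if (leftCurlies != 0 && leftCurlies == rightCurlies && !inString) {
                return i + 1;
            }
            // Frame grew past the limit before closing: give up.
            if (i >= maxFrameLength) {
                throw new IllegalStateException(
                        "frame length exceeds " + maxFrameLength);
            }
        }
        return -1; // incomplete frame; caller should wait for more bytes
    }
}
```

With a large response and the hard-coded 100,000-byte limit, the loop reaches the limit check before the braces balance, which is why the connection is torn down instead of the message being delivered.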



 Comments   
Comment by Sam Hague [ 12/Jan/16 ]

We should add a config param to use a value different from the 100k default for the buffer size.
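One hypothetical way to implement that suggestion is to read the limit from a system property with the current 100,000-byte value as the fallback. The property name "ovsdb.json.max.frame.length" below is invented for illustration; it is not an actual ODL configuration option.

```java
// Hypothetical sketch of making the decoder's frame limit configurable.
// The property name is illustrative, not a real OpenDaylight setting.
public class DecoderConfig {
    static final int DEFAULT_MAX_FRAME_LENGTH = 100_000;

    /** Returns the configured frame limit, falling back to 100,000 bytes. */
    public static int maxFrameLength() {
        String value = System.getProperty("ovsdb.json.max.frame.length");
        if (value == null) {
            return DEFAULT_MAX_FRAME_LENGTH;
        }
        try {
            int parsed = Integer.parseInt(value);
            // Reject nonsensical (zero or negative) limits.
            return parsed > 0 ? parsed : DEFAULT_MAX_FRAME_LENGTH;
        } catch (NumberFormatException e) {
            return DEFAULT_MAX_FRAME_LENGTH;
        }
    }
}
```

The decoder would then call DecoderConfig.maxFrameLength() once at construction time instead of hard-coding the constant, so operators with large OVSDB databases could raise the limit without rebuilding.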

Generated at Wed Feb 07 20:35:36 UTC 2024 using Jira 8.20.10#820010-sha1:ace47f9899e9ee25d7157d59aa17ab06aee30d3d.