<!-- 
RSS generated by JIRA (8.20.10#820010-sha1:ace47f9899e9ee25d7157d59aa17ab06aee30d3d) at Wed Feb 07 19:52:36 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>OpenDaylight JIRA</title>
    <link>https://jira.opendaylight.org</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>8.20.10</version>
        <build-number>820010</build-number>
        <build-date>22-06-2022</build-date>
    </build-info>


<item>
            <title>[CONTROLLER-263] MDSAL Notification pool threads not getting spawned besides the core threads</title>
                <link>https://jira.opendaylight.org/browse/CONTROLLER-263</link>
                <project id="10113" key="CONTROLLER">controller</project>
                    <description>&lt;p&gt;MDSAL Default Notification pool is currently configured as follows:&lt;br/&gt;
Core threads: 4&lt;br/&gt;
Max threads : 32&lt;br/&gt;
Queue: Unbounded Linkedblockingqueue&lt;/p&gt;

&lt;p&gt;During cbench tests, it is observed that the number of threads working on the queue never goes beyond 4.&lt;br/&gt;
It should ideally reach 32, as the notification queue is not being drained quickly enough.&lt;/p&gt;</description>
                <environment>&lt;p&gt;Operating System: Mac OS&lt;br/&gt;
Platform: Macintosh&lt;/p&gt;</environment>
        <key id="24817">CONTROLLER-263</key>
            <summary>MDSAL Notification pool threads not getting spawned besides the core threads</summary>
                <type id="10104" iconUrl="https://jira.opendaylight.org/secure/viewavatar?size=xsmall&amp;avatarId=10303&amp;avatarType=issuetype">Bug</type>
                                                <status id="5" iconUrl="https://jira.opendaylight.org/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="green"/>
                                    <resolution id="10000">Done</resolution>
                                        <assignee username="kramesha@cisco.com">Kamal Rameshan</assignee>
                                    <reporter username="kramesha@cisco.com">Kamal Rameshan</reporter>
                        <labels>
                    </labels>
                <created>Tue, 1 Apr 2014 22:47:48 +0000</created>
                <updated>Tue, 25 Jul 2023 08:23:22 +0000</updated>
                            <resolved>Wed, 23 Apr 2014 21:47:15 +0000</resolved>
                                                                    <component>mdsal</component>
                        <due></due>
                            <votes>0</votes>
                                    <watches>3</watches>
                                                                                                                <comments>
                            <comment id="47877" author="kramesha@cisco.com" created="Tue, 1 Apr 2014 22:50:24 +0000"  >&lt;p&gt;The ThreadPoolExecutor does not create new threads unless its queue is full.&lt;/p&gt;

&lt;p&gt;Since the notification queue is unbounded, that condition is never met, and hence no new threads are spawned (besides the 4 core ones).&lt;/p&gt;</comment>
                            <comment id="47878" author="rovarga" created="Wed, 2 Apr 2014 11:54:41 +0000"  >&lt;p&gt;I am not sure whether having an unbounded queue is the correct behavior here.&lt;/p&gt;</comment>
                            <comment id="47879" author="rovarga" created="Wed, 2 Apr 2014 12:14:42 +0000"  >&lt;p&gt;Sorry for incomplete comment.&lt;/p&gt;

&lt;p&gt;While I see the problem here, and the fix improves throughput somewhat, I am afraid it does not introduce a backpressure mechanism, and at some point we&apos;ll see producers overwhelm consumers, creating a huge backlog.&lt;/p&gt;

&lt;p&gt;I think we need a more complete solution, probably a per-producer queue and some sort of fair-queue dispatch threadpool, which would:&lt;/p&gt;

&lt;p&gt;1) prevent a single producer from monopolizing the notification dispatch threadpool&lt;br/&gt;
2) apply backpressure to heavy producers, such that we don&apos;t end up with huge backlogs&lt;/p&gt;

&lt;p&gt;I am thinking something along the lines of token buckets.&lt;/p&gt;

&lt;p&gt;Tony: where would be a good place to place such a rate-limiting mechanism?&lt;/p&gt;

&lt;p&gt;Kamal: can you introduce a knob, which would place a limit on the queue? ThreadPoolExecutor.CallerRunsPolicy looks like a good first cut at the rejection policy in that case.&lt;/p&gt;</comment>
                            <comment id="47880" author="kramesha@cisco.com" created="Wed, 2 Apr 2014 17:58:36 +0000"  >&lt;p&gt;The patch is to fix a bug in the current design.&lt;/p&gt;

&lt;p&gt;I am in total agreement that an unbounded queue is not the approach we want to take, as an OOM error is not far off.&lt;/p&gt;

&lt;p&gt;Rather than creating a queue per producer, my take would be to use a RateLimiter (com.google.common.util.concurrent) for each producer.&lt;br/&gt;
And the queue could be bounded, possibly with a higher limit.&lt;/p&gt;

&lt;p&gt;The exact throttling logic is something we still need to come up with.&lt;/p&gt;

&lt;p&gt;I can take this up as an enhancement, if we agree on the design.&lt;/p&gt;

&lt;p&gt;But for now, I think we should have a fix for the threadpool. Currently it is broken.&lt;/p&gt;</comment>
                            <comment id="47881" author="kramesha@cisco.com" created="Thu, 3 Apr 2014 18:42:17 +0000"  >
&lt;p&gt;Working on a patch to implement CallerRunsPolicy or any other way to throttle.&lt;/p&gt;

&lt;p&gt;Will run some cbench tests to see if it impacts performance significantly.&lt;/p&gt;</comment>
                            <comment id="47882" author="kramesha@cisco.com" created="Mon, 7 Apr 2014 15:52:46 +0000"  >&lt;p&gt;The CallerRunsPolicy implementation is causing thread boundaries to be violated and is also causing performance issues.&lt;/p&gt;

&lt;p&gt;Will raise a separate bug as an &apos;enhancement&apos; to introduce a throttle as Robert suggested.&lt;/p&gt;

&lt;p&gt;For now just fixing the patch for the service release: &lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://git.opendaylight.org/gerrit/#/c/5857/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://git.opendaylight.org/gerrit/#/c/5857/&lt;/a&gt;&lt;/p&gt;</comment>
                    </comments>
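<!--
Editor's note: a minimal, hypothetical Java sketch (not part of the OpenDaylight codebase) illustrating the root cause discussed in the comments above. A ThreadPoolExecutor only spawns threads beyond its core size when an offer to its work queue fails; an unbounded LinkedBlockingQueue always accepts tasks, so the pool is pinned at the 4 core threads regardless of backlog. Class and variable names here are illustrative.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGrowthDemo {
    public static void main(String[] args) throws Exception {
        // Core 4, max 32, unbounded queue: queue.offer() always succeeds,
        // so maxPoolSize is never consulted and no extra threads are created.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 32, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 100; i++) {
            // Each task blocks until released, simulating slow notification consumers.
            pool.execute(() -> {
                try {
                    release.await();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // 100 tasks are submitted, yet only the 4 core threads exist; the
        // remaining 96 tasks sit in the unbounded queue.
        System.out.println("pool size with unbounded queue: " + pool.getPoolSize());

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

Bounding the queue (and choosing a rejection policy such as CallerRunsPolicy) is what makes the executor grow toward maxPoolSize, which is the trade-off debated in this issue.
-->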
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                            <customfield id="customfield_11400" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10208" key="com.atlassian.jira.plugin.system.customfieldtypes:textfield">
                        <customfieldname>External issue ID</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>645</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10201" key="com.atlassian.jira.plugin.system.customfieldtypes:url">
                        <customfieldname>External issue URL</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue><![CDATA[https://bugs.opendaylight.org/show_bug.cgi?id=645]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_10202" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Priority</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10301"><![CDATA[Normal]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10000" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>0|i02jbj:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                </customfields>
    </item>
</channel>
</rss>