Discussion:
[Rtai] raise rt_sem_signal() efficiency.
Guo Yunfei
2018-06-17 08:15:53 UTC
Permalink
Hi, everyone:


I'm facing a problem with the low efficiency of rt_sem_signal().

I register my PCI interrupt handler with:
rt_request_linux_irq(pdev->irq, my_isr, "my_pci_device", dev);

The interrupt handler code is:

static irqreturn_t my_isr(int irq, void *data)
{
	struct my_device *dev = data;
	unsigned short LINTSR;             /* local interrupt status register */
	int iIntSource = 0;
	volatile RTIME t1, t2, t_elapsed;  /* for timing rt_sem_signal() */

	LINTSR = readw(dev->caddr + 0x28); /* read PCI card interrupt status */

	if (LINTSR & 0x1) {
		iIntSource = ReadIntSource();  /* check the interrupt source */
		if ((iIntSource & 0x1) == 0x1) {
			clearInterruptFunc();
			shm_ptr->IntType = 1;      /* Flag_ISR */
		}

		/* measure the rt_sem_signal() elapsed time */
		t1 = rt_get_cpu_time_ns();
		rt_sem_signal(&sem);
		t2 = rt_get_cpu_time_ns();
		t_elapsed = t2 - t1;
		/* printk("%lld\n", (long long)t_elapsed); */

		writew(LINTSR | 0xF, dev->caddr + 0x28); /* clear interrupt */
	}
	return IRQ_HANDLED;
}

Then I implement the real ISR work in user space with the LXRT method as below,
since I want to enjoy the convenience of user-space driver development.

static int Answer_irq(void)
{
	if (!(irqtask = rt_thread_init(nam2num("ANSWERIRQTSK"), 0, 0,
	                               SCHED_FIFO, 0x0))) {
		printf("CANNOT INIT PROCESS ANSWERIRQTSK\n");
		exit(1);
	}
	mlockall(MCL_CURRENT | MCL_FUTURE);

	rt_make_hard_real_time();
	while (1) {
		rt_sem_wait(sem);
		switch (shm_ptr->IntType) {
		case 1:
			REAL_ISR(); /* the real ISR work happens here */
			break;
		}
	}
	rt_make_soft_real_time();
	rt_task_delete(irqtask);
	return 0;
}

This mechanism works smoothly, but its efficiency is much lower: around 33%
slower than running REAL_ISR() directly in the kernel-space interrupt handler
(code below).

static irqreturn_t my_isr(int irq, void *data)
{
	struct my_device *dev = data;
	unsigned short LINTSR;             /* local interrupt status register */
	int iIntSource = 0;

	LINTSR = readw(dev->caddr + 0x28); /* read PCI card interrupt status */

	if (LINTSR & 0x1) {
		iIntSource = ReadIntSource();  /* check the interrupt source */
		if ((iIntSource & 0x1) == 0x1) {
			clearInterruptFunc();
			REAL_ISR(); /* the real ISR work runs in the interrupt handler */
		}

		writew(LINTSR | 0xF, dev->caddr + 0x28); /* clear interrupt */
	}
	return IRQ_HANDLED;
}

So I suspect rt_sem_signal(&sem) introduces much of the delay in this mechanism.
A dmesg segment of my rt_sem_signal(&sem) elapsed times (in ns) is below:
[ 198.234668] 1738
[ 198.235488] 4988
[ 198.236308] 913
[ 198.237128] 8038
[ 198.237945] 1625
[ 198.238764] 5038
[ 198.239584] 912
[ 198.240404] 3750
[ 198.241222] 1575
[ 198.242042] 5275
[ 198.242866] 2025
[ 198.243690] 11976
[ 198.244498] 1662
[ 198.245317] 5125
[ 198.246137] 875
[ 198.246959] 8201
[ 198.247774] 1612
[ 198.248593] 8351
[ 198.249413] 725
[ 198.250234] 7714
[ 198.251051] 1625
[ 198.251871] 5125
[ 198.252691] 1350
[ 198.253513] 8087
[ 198.254329] 1775
[ 198.255147] 8662
[ 198.255966] 813
[ 198.256787] 7387
[ 198.257604] 1525
[ 198.258429] 4725
[ 198.259266] 15699
[ 198.260064] 5362
[ 198.260882] 1562
[ 198.261707] 4975
[ 198.262540] 15750
[ 198.263346] 6388
[ 198.264160] 2063
[ 198.264987] 2337
[ 198.265822] 16974
[ 198.266625] 6350
[ 198.267442] 2475
[ 198.268268] 2350
[ 198.269084] 11675

Is this normal? (My platform is an Intel® Celeron N3160, 1.6 GHz, quad-core.)
Or do you have any suggestions for raising the IPC efficiency here?


Thanks in advance.

Guo Yunfei
