OMRON Forums

Triggered time base vs DIO monitoring


andyf

I am using an external time base, and I am looking at different ways to signal my motion program to start a move. I can generate a single pulse from a timing card at the precise time I want the move to start.

 

The first method is to receive the start trigger on an input of the ACC14E DIO card. My motion program monitors a variable mapped to that input, and when the variable goes active, the program executes the move.
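A minimal sketch of that polling approach as a PMAC motion program (the M-variable number, DIO address, and move are hypothetical; map them to your actual ACC14E input and axis):

```
M40->Y:$078C00,0,1        ; hypothetical M-variable mapped to an ACC14E input bit

OPEN PROG 1 CLEAR
WHILE (M40 = 0) WAIT      ; scanned once per RTI; blocks until the trigger goes active
X10 F5                    ; the move to execute once the trigger is seen
CLOSE
```

The WHILE()WAIT is the loop structure referred to in the reply below; the program re-checks M40 once per real-time interrupt, so the start latency depends on where in the RTI period the pulse arrives.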

 

The second method is to use the Triggered Time Base feature of the Delta Tau. I have set up and tested this method as explained in the user's manual.

 

My question is: does using Triggered Time Base actually result in the move starting more precisely than the DIO approach? I have run tests on both methods but cannot conclude that there is a difference (it may simply be too small, lost in the jitter of the start-trigger transmission).

 

From what the user's manual says, Triggered Time Base latches the external time base encoder position at the instant the trigger is received. Does this mean DeltaPos would be more accurate under this method than under the DIO method for the servo cycle in which the start trigger is received? If so, perhaps there is indeed a very small difference between the two methods.

 

When you start from a motion program and poll DIO, you must have some loop structure in the program, such as a while()wait. The motion program is only scanned once per RTI (real-time interrupt, whose period is set by I8), so this method cannot give you a start time better than +/-1 RTI.
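As a worked example of that bound (the numbers below assume Turbo PMAC defaults of a roughly 2.26 kHz servo interrupt and I8 = 2; check your own servo frequency and I8 setting):

```latex
T_{\mathrm{RTI}} = (I8 + 1)\,T_{\mathrm{servo}}
                \approx (2 + 1) \times 442\,\mu\mathrm{s}
                \approx 1.33\,\mathrm{ms}
```

so a polled start can begin anywhere up to about one full RTI period after the trigger pulse actually arrives.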

 

Triggered time base captures the start signal and latches an encoder position. When the next servo interrupt occurs, the ECT processes the fact that the start signal occurred, sets the time base accordingly, and sets an internal pointer based on the interpolated difference between when the start trigger occurred and the current position of the master encoder. The result from my testing (20 years ago) was +/-1 master encoder count of dither. Since then, each time a discrepancy has occurred, the true cause has in the end turned out to be something other than the accuracy of the triggered time base.
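A rough sketch of that interpolation, as I read the description above (my own notation, not the documented internals): if the trigger latches master position P_t and the next servo interrupt samples master position P_s, the time base is started with the offset

```latex
\Delta = P_s - P_t
```

already accumulated, as if counting had begun exactly at the trigger instant. Any residual error then comes from the sub-count interpolation, which is consistent with the observed +/-1 master count dither.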
