The recommended approach to invoking subprocesses is to use the run() function for all use cases it can handle. run() cannot stream output while the process is still running, though, so for flashing ESP boards with esptool.py I wrapped Popen in a small class that hands each line of output to a callback and compares it against a validator string. The only thing I haven't solved is why this works perfectly for log messages, but I see some print messages show up later and all at once. I'm sure there is overhead being added here, but it is not a concern in my case.

```python
import subprocess
import time

class Execute:
    def start_process(self, cmd_list, callback=None, validator=None, timeout=None):
        self.validator = validator
        # No timeout means wait indefinitely.
        self.timeout = timeout if timeout is not None else float("inf")
        self.result = False
        self.process = subprocess.Popen(cmd_list, stdout=subprocess.PIPE,
                                        stderr=subprocess.PIPE, shell=True)
        start = time.time()
        # Poll until the process exits or the timeout elapses,
        # handing each line of output to the callback as it arrives.
        while self.process.poll() is None and (time.time() - start) < self.timeout:
            line = self.process.stdout.readline().decode(errors="replace")
            if callback and line:
                callback(line)
            if self.validator and self.validator in line:
                print("Valid")
                self.result = True
        return self.result

cmd = 'esptool.py -chip esp8266 write_flash -z 0x1000 /home/pi/zero2/fw/base/boot_40m.bin'
cmd2 = 'esptool.py -chip esp32 -b 115200 write_flash -z 0x1000 /home/pi/zero2/fw/test.bin'
cmd3 = 'esptool.py -chip esp32 -b 115200 erase_flash'

execute = Execute()
result = execute.start_process(cmd2, callback=liveOUTPUT, validator="Hash of data verified.")
```
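The liveOUTPUT callback is never defined in the answer; presumably it is the caller's line handler. A minimal hypothetical stand-in that simply echoes each line as it arrives:

```python
def liveOUTPUT(line):
    # Hypothetical callback (not shown in the original answer):
    # echo each line from the child process immediately.
    print(line, end='', flush=True)
```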
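For contrast, when live streaming isn't required, the run() function recommended above does the whole job in a few lines. A minimal sketch, assuming Python 3.7+ and reusing cmd3 from the snippet above:

```python
import subprocess

# Run to completion, capture both streams, and decode them as text.
completed = subprocess.run(cmd3, shell=True, capture_output=True, text=True)
print(completed.returncode)
print(completed.stdout)
```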
Note that in my use case, an external process kills the process that we Popen(). This PoC constantly reads the output from a process so it can be accessed when needed. Only the last result is kept; all other output is discarded, which prevents the PIPE from growing out of memory:

```python
import subprocess
import threading

class FlushPipe(object):
    def __init__(self, command):
        self.command = command
        self.process = None
        self.last_output = None
        self.capture_output = threading.Thread(target=self.output_reader)

    def output_reader(self):
        # Keep only the most recent line; everything older is dropped,
        # so the pipe buffer never fills up.
        for line in iter(self.process.stdout.readline, b''):
            self.last_output = line

    def start_process(self):
        self.process = subprocess.Popen(self.command, stdout=subprocess.PIPE)
        self.capture_output.start()
```

None of the answers here addressed all of my needs:

- No threads for stdout (no Queues, etc., either).
- Non-blocking, as I need to check for other things going on.
- Use PIPE, as I needed to do multiple things, e.g. stream output, write to a log file, and return a string copy of the output.

A little background: I am using a ThreadPoolExecutor to manage a pool of threads, each launching a subprocess and running them concurrently. I don't want to use threads just for output gathering, as I want as many available as possible for other things (a pool of 20 processes would be using 40 threads just to run: 1 for the process thread and 1 for stdout, and more if you want stderr, I guess). This is in Python 2.7, but it should work in newer 3.x as well. I'm stripping back a lot of exception handling and such here, so this is based on code that works in production. Hopefully I didn't ruin it in the copy and paste. Also, feedback very much welcome!

```python
import time
import fcntl
import subprocess
import os
import sys

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

# Make stdout non-blocking when using read/readline
proc_stdout = proc.stdout
fl = fcntl.fcntl(proc_stdout, fcntl.F_GETFL)
fcntl.fcntl(proc_stdout, fcntl.F_SETFL, fl | os.O_NONBLOCK)

def handle_stdout(proc_stream, my_buffer, echo_streams=True, log_file=None):
    """A little inline function to handle the stdout business."""
    # fcntl makes readline non-blocking so it raises an IOError when empty
    try:
        for s in iter(proc_stream.readline, ''):  # replace '' with b'' for Python 3
            my_buffer.append(s)
            if echo_streams:
                sys.stdout.write(s)
            if log_file:
                log_file.write(s)
    except IOError:
        pass

# The main loop while subprocess is running
stdout_parts = []
while proc.poll() is None:
    handle_stdout(proc_stdout, stdout_parts)
    # Check for other things going on here...
    # For example, check a multiprocessing.Value('b') to proc.kill()
    time.sleep(0.01)

# Not sure if this is needed, but run it again just to be sure we got it all?
handle_stdout(proc_stdout, stdout_parts)

stdout_str = "".join(stdout_parts)  # Just to demo
```

I'm sure there is overhead being added here, but it is not a concern in my case.
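On Python 3.5+, the F_GETFL/F_SETFL dance above can be replaced with a single os.set_blocking() call, which should have the same effect. A minimal sketch, assuming the same proc object:

```python
import os

# Put the pipe into non-blocking mode so read()/readline() return
# immediately instead of waiting for the child to produce output.
os.set_blocking(proc.stdout.fileno(), False)
```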
In case someone wants to read from both stdout and stderr at the same time using threads, this is what I came up with:

```python
import threading
import subprocess
import time
import sys
import Queue  # the queue module in Python 3

class AsyncLineReader(threading.Thread):
    def __init__(self, fd, outputQueue):
        threading.Thread.__init__(self)
        assert isinstance(outputQueue, Queue.Queue)
        assert callable(fd.readline)
        self.fd = fd
        self.outputQueue = outputQueue

    def run(self):
        # Push each line from the pipe onto the queue until EOF.
        map(self.outputQueue.put, iter(self.fd.readline, ''))

    def eof(self):
        # Done once the reader thread has exited and the queue is drained.
        return not self.is_alive() and self.outputQueue.empty()

    @classmethod
    def getForFd(cls, fd, start=True):
        queue = Queue.Queue()
        reader = cls(fd, queue)
        if start:
            reader.start()
        return (reader, queue)
```
Then, to use it:

```python
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(stdoutReader, stdoutQueue) = AsyncLineReader.getForFd(process.stdout)
(stderrReader, stderrQueue) = AsyncLineReader.getForFd(process.stderr)

# Keep checking queues until there is no more output.
while not stdoutReader.eof() or not stderrReader.eof():
    # Process all available lines from the stdout Queue.
    while not stdoutQueue.empty():
        sys.stdout.write(stdoutQueue.get())
    # Process all available lines from the stderr Queue.
    while not stderrQueue.empty():
        sys.stderr.write(stderrQueue.get())
    # Sleep for a short time to avoid excessive CPU use while waiting for data.
    time.sleep(0.05)

print "Waiting for async readers to finish..."
stdoutReader.join()
stderrReader.join()

returnCode = process.wait()
if returnCode != 0:
    raise subprocess.CalledProcessError(returnCode, command)
```

I just wanted to share this, as I ended up on this question trying to do something similar, but none of the answers solved my problem.
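For what it's worth, on Python 3 (POSIX only) the same stdout-and-stderr multiplexing can be done without any reader threads by waiting on both pipes with the selectors module. A minimal sketch, assuming a command string as above and line-oriented child output:

```python
import selectors
import subprocess
import sys

process = subprocess.Popen(command, shell=True,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)

sel = selectors.DefaultSelector()
sel.register(process.stdout, selectors.EVENT_READ)
sel.register(process.stderr, selectors.EVENT_READ)

open_streams = 2
while open_streams:
    # Block until at least one pipe has data (or has hit EOF).
    for key, _ in sel.select():
        line = key.fileobj.readline()
        if not line:  # EOF on this stream
            sel.unregister(key.fileobj)
            open_streams -= 1
        elif key.fileobj is process.stdout:
            sys.stdout.write(line.decode())
        else:
            sys.stderr.write(line.decode())
process.wait()
```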